Next Article in Journal
PSO-Guided Construction of MRD Codes for Rank Metrics
Previous Article in Journal
Correction: Jung et al. GazeMap: Dual-Pathway CNN Approach for Diagnosing Alzheimer’s Disease from Gaze and Head Movements. Mathematics 2025, 13, 1867
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

KANJDP: Interpretable Temporal Point Process Modeling with Kolmogorov–Arnold Representation

1
College of Intelligence Science and Technology, National University of Defense Technology, No. 109 Deya Street, Changsha 410073, China
2
National Innovative Institute of Defense Technology, Academy of Military Sciences, No. 53, Dongdajie Street, Beijing 100071, China
*
Authors to whom correspondence should be addressed.
Mathematics 2025, 13(17), 2754; https://doi.org/10.3390/math13172754
Submission received: 24 July 2025 / Revised: 22 August 2025 / Accepted: 23 August 2025 / Published: 27 August 2025

Abstract

Accurate modeling of event sequences is valuable in domains like electronic health records, financial risk management, and social networks. Random time intervals in these sequences contain key dynamic information, and temporal point processes (TPPs) are widely used to analyze event triggering mechanisms and probability evolution patterns in asynchronous sequences. Neural TPPs (NTPPs) enhanced by deep learning improve modeling capabilities, but most suffer from limited interpretability due to predefined functional structures. This study proposes KANJDP (Kolmogorov–Arnold Neural Jump-Diffusion Process), a novel event sequence modeling method: it decomposes the intensity function via stochastic differential equations (SDEs), with each component parameterized by learnable spline functions. By analyzing each component’s contribution to event occurrence, KANJDP quantitatively reveals core event generation mechanisms. Experiments on real-world and synthetic datasets show that KANJDP achieves higher prediction accuracy with fewer trainable parameters.

1. Introduction

Understanding and modeling the temporal dynamics of discrete events is a fundamental problem in many domains, such as healthcare [1], finance [2], and social systems [3]. In real-world scenarios, events often occur at irregular time intervals, forming asynchronous event sequences. The time gaps between events contain valuable dynamic information, implicitly reflecting the underlying triggering mechanisms and the temporal evolution of event occurrence probabilities. Effectively capturing these dynamics is crucial for improving predictive performance and gaining deeper insights into the underlying generative processes.
Temporal point processes (TPPs) are widely used frameworks for modeling such sequences, where the event dynamics are governed by an intensity function defined over continuous time. With the development of deep learning, NTPPs have emerged as powerful models that leverage neural networks to significantly enhance the expressive capacity of traditional TPPs.
Despite their modeling power, existing NTPPs face several critical limitations. A major concern lies in their limited interpretability. These models often employ complex architectures such as RNNs [4], Transformers [5], or MLPs to parameterize the intensity function, making them essentially “black-box” systems. As a result, it is difficult to attribute the cause of an event to specific prior influences, such as preceding events, long-term trends, or cyclical fluctuations, which restricts the transparency and trustworthiness of the predictions.
Another issue is the lack of representational flexibility. Many NTPPs rely on fixed functional assumptions—such as exponential decay or additive/multiplicative combinations—which constrain their ability to capture rich, nonlinear, and context-dependent event interactions. However, real-world sequences often exhibit non-stationary and irregular patterns that cannot be adequately modeled using these rigid structures.
Moreover, the training process is computationally intensive and potentially unstable. Estimating the likelihood over continuous time typically involves solving complex integrals, often requiring numerical approximation methods such as Monte Carlo sampling. This not only slows down convergence but also hampers performance on long or high-frequency sequences.
To address these limitations, we propose KANJDP, a novel method for modeling asynchronous event sequences with improved interpretability and expressiveness. KANJDP decomposes the event intensity function using a jump-diffusion SDE [6,7] framework into three components: drift, diffusion and jump [8]. The drift term captures long-term trends, the diffusion term models random fluctuations, and the jump term reflects discrete interactions between events. This decomposition enables KANJDP to flexibly adapt to non-stationary and bursty event sequences.
To improve training efficiency and reduce the number of trainable parameters, we employ an enhanced version of Kolmogorov–Arnold Networks (KAN) [9] instead of traditional multi-layer perceptrons (MLPs) to model each component. Building upon the efficient FastKAN [10] implementation that replaces computationally intensive B-splines with Gaussian radial basis functions (RBFs), we introduce three crucial architectural innovations to address the unique challenges of TPP modeling.
Our enhanced KAN architecture incorporates several key improvements over the standard FastKAN design. We make the basis functions fully adaptive by treating both the centers and bandwidths of the Gaussian RBFs as learnable parameters during training, allowing the network to autonomously adjust the location and shape of activation functions according to the specific characteristics of the input data distribution. To enhance feature selection capabilities, we integrate a dimensional attention mechanism into each KAN layer that calculates weight vectors to dynamically focus on the most salient features for predicting drift, diffusion, and jump components. Additionally, we apply layer normalization before each KAN layer to improve training stability and ensure input values remain within the effective support of the RBF kernels—a stabilization particularly crucial for TPP modeling, where the likelihood objective incentivizes learning extremely “spiky” intensity functions that can lead to gradient instability in standard implementations. These architectural enhancements collectively enable superior modeling precision and flexibility while maintaining stable training convergence, allowing for function-level decomposition that provides deeper understanding of the internal model behavior.
By analyzing the individual contribution of each component to the overall intensity, KANJDP provides interpretable and quantitative insights into the underlying event generation mechanisms while maintaining stable training convergence.
Our contributions can be summarized as follows:
  • Enhanced KAN Architecture. Critical architectural improvements to FastKAN including learnable basis functions, dimensional attention, and layer normalization that ensure stable training and superior expressiveness for TPP modeling.
  • Novel KANJDP Framework. A pioneering NTPP model based on jump-diffusion SDEs that decomposes event intensity into interpretable drift, diffusion, and jump components, providing functional-level attribution and quantitative insights into underlying event generation mechanisms.
  • Superior Performance. Demonstrates state-of-the-art results on both real-world and synthetic datasets while using fewer trainable parameters and achieving stable training convergence.

2. Related Work

2.1. Event Prediction Methods

TPP models are central to event sequence prediction. Classical parametric models, such as the Poisson process and the Hawkes process, form the bedrock of this field. The Poisson process, however, assumes event independence, a premise often too simplistic for real-world systems where events are interdependent [11]. The Hawkes process addresses this by introducing a self-excitation mechanism, where past events intensify the probability of future occurrences, making it highly effective for modeling phenomena like aftershock sequences in seismology and volatility clustering in finance [12]. Despite their utility, these models are constrained by rigid parametric forms, limiting their capacity to capture complex, nonlinear, or long-range dependencies inherent in many event dynamics.
To overcome these limitations, recent research has gravitated towards NTPPs, which employ deep learning to model the conditional intensity function with greater flexibility. Early NTPPs utilized RNNs to encode the event history, as seen in models like RMTPP (recurrent marked temporal point process) [4]. While innovative, these models inherited the difficulties of RNNs in capturing long-term dependencies. Subsequent architectural advancements sought to address these issues. The introduction of continuous-time LSTMs offered a more robust memory mechanism [13], while the shift to attention-based Transformer architectures, as proposed in the Transformer Hawkes process (THP) and self-attentive Hawkes process (SAHP), provided significant advantages [5,14]. Notably, Transformer-based models like the attentive neural Hawkes process (A-NHP) allow for greater parallelization and direct access to the entire event history, avoiding the information bottleneck of recurrent approaches [15,16].
Despite marked improvements in predictive accuracy, a primary challenge for most NTPPs remains their “black-box” nature, which hinders interpretability. This is a critical drawback in high-stakes domains like healthcare, where understanding the rationale behind a prediction is as vital as the prediction itself [17,18]. Several research avenues have emerged to address this. One direction focuses on designing inherently interpretable models. For instance, the hybrid-rule temporal point processes (HRTPP) framework explicitly integrates discoverable temporal logic rules with numerical features, creating a hybrid model that balances interpretability with predictive power [17]. Another approach, designed for sequences without meaningful timestamps, involves Summary Markov Models (SuMMs), which learn a minimal “influencing set” of event types for a target event, thereby revealing interpretable dynamics from the data [19].
A more recent frontier in event prediction involves augmenting sequence models with the reasoning capabilities of Large Language Models (LLMs). This paradigm acknowledges that real-world events are often accompanied by rich textual information that traditional TPPs cannot leverage. The LAMP framework, for example, pioneers this approach by using an LLM to perform abductive reasoning [18]. In this setup, a base event model first proposes candidate future events. Then, guided by few-shot demonstrations, an LLM suggests plausible causes for each proposal. These causes are used to retrieve supporting evidence from the actual event history, and a final ranking model scores the proposals based on the strength of their evidence [18]. This method demonstrates a significant performance leap by integrating high-level reasoning with statistical sequence modeling.

2.2. KAN-Related Work

KANs [9] represent a significant innovation in neural network architectures. Based on the Kolmogorov–Arnold representation theorem, KANs differ from traditional networks by placing learnable activation functions directly on the edges of the network, rather than at the nodes. This design allows for a better approximation of complex functions, often with fewer parameters, and enhances the model’s interpretability [20,21].
Although KANs show promise, their practical application faces a primary challenge: the high computational complexity stemming from the B-spline basis functions [22]. To mitigate this, several works have proposed replacing B-splines with more computationally efficient alternatives. For instance, Yang et al. [22] introduced rational functions with a dedicated CUDA implementation to better leverage modern GPU architectures. Other explorations include using Chebyshev polynomials [23], wavelets [21], and radial basis functions (RBFs) [24] as alternative basis functions.
In the realm of computer vision, the initial concept of Convolutional KANs (CKANs) was introduced by Bodner et al. [25], who replaced traditional linear kernels with learnable spline-based functions. Building upon this, subsequent research has focused on making CKANs more practical and scalable. Drokin [24] proposed a parameter-efficient “Bottleneck KAN” design, while Yang et al. [22] developed Group-Rational KANs (GR-KANs) to address parameter and computation inefficiencies, successfully creating the Kolmogorov–Arnold Transformer (KAT) and scaling it to ImageNet-level tasks. Furthermore, KANs have been integrated as specialized modules into established architectures; for example, U-KAN [26] embeds a tokenized KAN block into the U-Net for medical image segmentation, and RKAN [23] introduces a cross-stage residual KAN block to enhance existing CNNs like ResNet.
Beyond computer vision, the application of KANs has been extended to other domains. In time series forecasting, Han et al. [21] proposed the Multi-layer Mixture-of-KAN (MMK), which uses a mixture-of-experts structure to adaptively assign variables to the most suitable KAN variants. For scientific computing, Howard et al. [27] developed Finite Basis KANs (FBKANs), which employ domain decomposition to solve complex, physics-informed problems.
However, the universal superiority of KANs has been questioned. A comprehensive study by Yu et al. [20] conducted a fairer comparison between KAN and MLP under controlled parameter and FLOP constraints. Their findings suggest that, except for symbolic formula representation, MLP generally outperforms KAN across a wide range of tasks in machine learning, computer vision, and NLP. The study attributes KAN’s advantage in symbolic tasks primarily to its B-spline activation function rather than its unique architecture.
Despite these debates, the application of KAN to temporal point processes (TPPs) holds great potential. The spline-based activation functions can naturally model the complex, non-monotonic relationships between past events and future event probabilities. Furthermore, KAN’s inherent interpretability allows for symbolic regression, potentially uncovering the underlying mechanisms driving event dynamics [21]. These characteristics make KANs particularly well-suited for TPP models that require both flexibility and interpretability, especially in high-dimensional and dynamic event spaces.

3. Background

Jump-diffusion systems provide a unified mathematical framework for modeling phenomena exhibiting both continuous evolution and discrete discontinuities [28]. We consider one-dimensional autonomous jump-diffusion SDEs of the form:
d h ( t ) = μ ( h ( t ) ) d t + σ ( h ( t ) ) d W ( t ) + γ ( h ( t ) ) d N ( t ) h ( 0 ) = h 0
where h ( t ) R is the scalar state variable of the jump-diffusion process, h ( t ) denotes the left-limit at time t, W ( t ) is standard Brownian motion, N ( t ) is a counting process representing the cumulative number of jumps up to time t, and μ ( · ) , σ ( · ) , γ ( · ) represent the drift, diffusion, and jump coefficient functions, respectively. This formulation naturally decomposes dynamics into three interpretable components: the drift term μ ( h ( t ) ) d t captures deterministic trends, the diffusion term σ ( h ( t ) ) d W ( t ) models continuous random fluctuations, and the jump term γ ( h ( t ) ) d N ( t ) accounts for discrete event-driven changes.
Strong solutions to Equation (1) exist and are unique under standard regularity conditions. The coefficient functions must satisfy Lipschitz continuity and linear growth:
| μ ( x ) μ ( y ) | + | σ ( x ) σ ( y ) | + | γ ( x ) γ ( y ) | L | x y | | μ ( x ) | 2 + | σ ( x ) | 2 + | γ ( x ) | 2 K ( 1 + | x | 2 )
for constants L , K > 0 , ensuring solution stability and preventing finite-time explosions. We consider counting processes N ( t ) from the class of locally integrable point processes with conditional intensity λ ( t | H t ) adapted to filtration H t , satisfying E [ 0 T λ ( s | H s ) d s ] < and the Markov property. Here, λ ( t | H t ) represents the instantaneous probability rate of an event occurring at time t given the history H t up to time t. This framework encompasses classical point processes including Poisson processes ( λ ( t ) = λ 0 ), self-exciting Hawkes processes ( λ ( t ) = μ + α t i < t e β ( t t i ) ), and self-correcting processes ( λ ( t ) = e μ t α N ( t ) ).
For temporal point process modeling, we parameterize the log-intensity η ( t ) = log λ ( t ) using this jump-diffusion framework:
d η ( t ) = f θ f ( z ( t ) ) d t + g θ g ( z ( t ) ) d W ( t ) + h θ h ( z ( t ) ) d N ( t )
where z ( t ) = ( η ( t ) , h ( t ; { t i } ) ) R d + 1 combines the current log-intensity η ( t ) with the history embedding vector h ( t ; { t i } ) R d that encodes information from the sequence of past event times { t i } i = 1 N ( t ) . The functions f θ f , g θ g , h θ h are KAN-parameterized coefficient functions that map the state z ( t ) to the drift, diffusion, and jump coefficients, respectively. Note that the scalar function h ( t ) in Equation (1) represents the state variable of a general jump-diffusion process, while the vector-valued function h ( t ; { t i } ) here represents a history-dependent embedding that captures temporal dependencies in the point process context. Event intensities naturally exhibit smooth evolution between events, trend behavior over long time scales, and sudden changes at event occurrences, corresponding precisely to the diffusion, drift, and jump components, respectively. This decomposition enables interpretable modeling while maintaining mathematical rigor, with regularity conditions ensured by our bounded KAN parameterization.

4. Method

4.1. Model Framework

To address the limited expressiveness and interpretability of traditional NTPP models, which often rely on MLPs with fixed activation functions or rigid parametric forms, we propose the KANJDP (Kolmogorov–Arnold Neural Jump-Diffusion Process). Our model innovatively parameterizes the drift, diffusion, and jump components of a jump-diffusion SDE using an enhanced KAN. This decomposition allows KANJDP to flexibly model complex, non-stationary dynamics while offering fine-grained interpretability into the underlying generative process.
The KANJDP framework first leverages a Transformer [29] to process the event history { t i : t i < t } and generate a time-dependent embedding h ( t ; { t i } ) . The Transformer employs a four-layer multi-head self-attention mechanism, combined with temporal positional encoding, to capture long-range dependencies and nonlinear interactions within the event sequence, resulting in a context-rich embedding vector. This vector serves as the input to our improved KAN architecture. The Transformer-generated embedding h ( t ; { t i } ) remains constant between events and is updated only at event occurrences, following the piecewise-constant assumption common in TPP modeling.
To ensure the intensity λ ( t ) > 0 , KANJDP models the log-intensity process η ( t ) = log λ ( t ) . For brevity, we define the combined state vector as z ( t ) = ( η ( t ) , h ( t ; { t i } ) ) . The process is then governed by the following SDE:
d η ( t ) = f θ f ( z ( t ) ) d t + g θ g ( z ( t ) ) d W ( t ) + h θ h ( z ( t ) ) d N ( t ) η ( 0 ) = log λ 0
where f θ f , g θ g , and h θ h are our enhanced KANs representing the drift, diffusion, and jump coefficient functions, respectively.

Parameter Estimation and Function Learning

In practice, the coefficient functions in Equation (4) are learned through end-to-end training via maximum likelihood estimation (MLE). Each function is parameterized by a separate enhanced KAN network: f θ f ( z ( t ) ) captures the deterministic drift component, g θ g ( z ( t ) ) models the stochastic diffusion (constrained to be non-negative via softplus activation), and h θ h ( z ( t ) ) represents discrete jump magnitudes. The parameter sets { θ f , θ g , θ h } include the adaptive RBF centers, bandwidths, attention weights, and normalization parameters, which are jointly optimized during training to automatically adapt to the underlying data patterns.
The parameter sets { θ f , θ g , θ h } are jointly optimized by minimizing the negative log-likelihood of the observed event sequences. The adaptive nature of our enhanced KAN architecture allows these functions to automatically adjust their complexity and shape according to the underlying data patterns, without requiring manual specification of functional forms. During training, the SDE is numerically integrated using the Euler-Maruyama method, and gradients are computed through the integrated trajectory.
By decomposing multivariate functions into sums of learnable univariate functions, this approach offers a more efficient and interpretable modeling framework than conventional MLPs. The overall architecture is illustrated in Figure 1.

4.2. Enhanced KAN for SDE Parameterization

The core of KANJDP’s parameterization lies in its novel KAN architecture. Our design is built uponFastKAN [10], an efficient implementation that replaces the computationally intensive B-splines of the original KAN with RBFs. This foundational choice provides a significant speed advantage. However, to better capture the complex, non-stationary dynamics of TPPs, we introduce three crucial innovations to the standard FastKAN design: learnable basis functions, a dimensional attention mechanism, and robust multi-layer composition.

4.2.1. Learnable Basis Functions: Adaptive Grids and Bandwidths

The standard FastKAN architecture utilizes a fixed grid of RBFs. In contrast, KANJDP enhances this by making the basis functions fully adaptive. Specifically, both the centers ( c k ) and the bandwidths (related to the denominator term, h) of the Gaussian RBFs are treated as learnable parameters during training. The basis function for an input feature x is defined as:
ϕ ( x ) = exp x norm c d · b 2
where x norm is the layer-normalized input, c represents the learnable grid centers, d denotes the base denominator parameter that controls the overall scale of the RBF, and b is the learnable bandwidth scaling factor that allows fine-tuning of the activation width.
For bandwidth optimization, we adopt a systematic initialization and training strategy: the base denominator d is initialized to 1.0 and updated slowly with a learning rate scaling factor of 0.1 to maintain stability, while the bandwidth scaling factor b is initialized using Xavier uniform initialization and optimized with the standard learning rate. During training, the effective bandwidth d · b is constrained within the range [ 0.1 , 10.0 ] through clipping to prevent numerical instability. This dual-parameter approach allows fine-grained control: d provides coarse-scale adjustment while b enables precise local tuning based on the data distribution density.
This adaptive mechanism allows KANJDP to tailor its nonlinear transformations to the specific characteristics of the input data distribution, significantly improving modeling precision and flexibility over fixed-grid approaches.

4.2.2. Dimensional Attention Mechanism

To enhance the model’s ability to discern the relative importance of different features within the time-dependent embedding h ( t ) , we introduce a dimensional attention mechanism into each KAN layer. Before computing the RBF-based transformations, a lightweight attention module calculates a weight vector α :
α = Softmax ( Tanh ( h W 1 + b 1 ) W 2 + b 2 )
where W 1 , b 1 , W 2 , b 2 are trainable parameters. The input embedding is then modulated by these attention weights:
h = h α
where ⊙ denotes the element-wise (Hadamard) product between the input embedding h and the attention weights α . This mechanism enables KANJDP to dynamically focus on the most salient features for predicting the drift, diffusion, and jump components, thereby improving both interpretability and predictive accuracy, especially in high-dimensional contexts.

4.2.3. Stable Multi-Layer Composition

To improve training stability and ensure that input values remain within the effective support of the RBF kernels, we apply Layer Normalization before each KAN layer. The complete KAN module within KANJDP is constructed by stacking three such enhanced layers. Each layer’s output is passed to the next, allowing for the construction of progressively more complex and hierarchical nonlinear functions. This multi-layer composition, stabilized by normalization and enriched by our adaptive mechanisms, enhances the model’s capacity for representation while preserving interpretability and reducing the overall parameter count compared to deep MLPs.

4.3. Model Training and Optimization

The training objective of the KANJDP model is to maximize the log-likelihood function of an event sequence D = { t i } i = 1 n over the time interval [ 0 , T ] . The model is trained using maximum likelihood estimation (MLE), and the objective is to optimize the log-likelihood function:
L ( θ ) = i = 1 n log λ ( t i ) 0 T λ ( t ) d t
where the first term of the log-likelihood function, i = 1 n log λ ( t i ) , can be directly computed at the event times t i . However, the second term, 0 T λ ( t ) d t , represents a continuous-time integral, and λ ( t ) is a complex function that cannot be solved analytically. Therefore, we approximate this term using numerical integration methods, such as the trapezoidal rule.
To handle the integrals in the jump-diffusion SDE, we use the Euler–Maruyama method to numerically integrate the diffusion terms. Specifically, at each time step Δ t , we sample the random process W ( t ) from the standard normal distribution and compute its impact on λ t :
W ( t + Δ t ) = W ( t ) + Δ t · ϵ , ϵ N ( 0 , 1 )
We employ the Adam optimizer for parameter updates with a learning rate of 10 3 , a batch size of 32, and 100 training epochs. To prevent overfitting, we apply L 2 regularization to the FastKAN weights with a regularization strength of λ = 10 4 . During training, layer normalization and the inherent sparsity of the RBF activations ensure robust convergence.

4.4. Event Prediction and Real-World Constraints

For event prediction, we distinguish between univariate sequences requiring only next event time t n + 1 and multivariate sequences requiring both time and type m n + 1 . Given historical sequence { ( t i , m i ) } i = 1 n , the conditional density function for next event time is:
f t n + 1 ( t ) = λ t exp t n t λ s d s
The expected next event time follows from numerical integration t ^ n + 1 = t n t · f t n + 1 ( t ) d t , while multivariate type prediction uses:
m ^ n + 1 = arg max m λ m ( t n + 1 ) λ g ( t n + 1 )
where λ m ( t n + 1 ) and λ g ( t n + 1 ) denote type-specific and total intensities, respectively [5,18,30].
Real-World Constraints: While our framework assumes continuous-time processes, practical applications involve discrete observations with temporal constraints. For processes observed at intervals h with constraints m h t i + 1 t i M h , we employ adaptive strategies. Numerical integration uses adjusted step sizes Δ t = min ( h / 10 , m h / 5 ) based on minimum inter-event times. Near constraint boundaries, we apply temporal smoothing λ ˜ ( t ) = λ ( t ) · w ( t ) + λ ¯ · ( 1 w ( t ) ) where w ( t ) provides boundary weighting and λ ¯ represents average intensity.
For discrete-time sequences with regular spacing t i = n i h where n i N , we regularize intensity estimation:
λ ^ ( t ) = σ 1 M m + 1 k = m M λ ( t k h )
ensuring bounded and smooth estimates through sigmoid activation σ ( · ) . Validation on taxi GPS data (5-s intervals) and medical records (daily observations) demonstrates robust performance across temporal resolutions, with standard formulation for Δ t 1 second, adaptive basis functions for irregular sampling, and smoothing parameter α = 0.1 for constrained intervals.

5. Experiments

We systematically validate the effectiveness of KANJDP through comprehensive experiments on both real-world and synthetic datasets. Our evaluation encompasses predictive performance, interpretability analysis, theoretical validation, and architectural design choices.

5.1. Experimental Setup

We evaluate KANJDP on three real-world multivariate event sequence datasets: Earthquake, Taxi [28], and MIMIC-II [11], sourced from the EasyTPP benchmark [30]. Performance assessment covers overall model fit via negative log-likelihood (NLL), event type prediction using accuracy and weighted F1-score, and event time prediction through root mean squared error (RMSE).
KANJDP is compared against diverse baselines including RMTPP, NHP, SAHP, THP, AttNHP (attentive neural Hawkes process), and NSTPP (neural spatio-temporal point process). All experiments utilize an NVIDIA RTX 4090 GPU with PyTorch 2.1.2 and CUDA 11.8 implementation. Latent dynamics simulation employs the Euler–Maruyama scheme with five substeps per inter-event interval. For the integral approximation in Equation (8), we use the trapezoidal rule with adaptive step sizes Δ t = min ( 0.01 , ( t i + 1 t i ) / 10 ) to ensure accurate likelihood computation while maintaining computational efficiency, with approximation errors remaining below 10 4 for typical event sequences. Training uses Adam optimizer with learning rate 1 × 10 4 , weight decay 1 × 10 5 , gradient clipping at maximum norm 5.0, and trainable initial log-intensity η 0 .
Computational Efficiency: To assess practical scalability, we measured training and inference costs across datasets. KANJDP requires 12.3 s/epoch on Earthquake (6832 events), 45.7 s/epoch on Taxi (89,563 events), and 8.9 s/epoch on MIMIC-II (2371 events), achieving 35% faster training than AttNHP while using 32% less GPU memory (3.8 GB vs. 5.6 GB peak usage). The parameter-efficient KAN architecture enables processing of sequences up to 100K events without mini-batching on standard hardware.

5.2. Performance on Real-World Datasets

We evaluate KANJDP on next-event prediction across three perspectives: overall model fit, event type prediction, and event time prediction. Table 1 presents negative log-likelihood (NLL) results, where KANJDP consistently outperforms all baselines [30], achieving state-of-the-art performance on all datasets.
Notably, KANJDP outperforms NJDTPP (Neural Jump-Diffusion TPP), the most comparable baseline, through superior KAN parameterization (MIMIC-II: 1.114 vs. 1.228). The negative NLL values on Taxi indicate the model captures underlying patterns so well that it assigns higher likelihood than the empirical distribution.
Comparison with LLM-based Methods. Recent LLM-augmented TPPs like LAMP [18] excel on text-enriched sequences but require textual event descriptions absent in our benchmarks. On a supplementary analysis using social media cascades with text (not included in main results), LAMP achieves NLL = 1.832 versus KANJDP’s 1.967, confirming LLMs’ advantage when leveraging linguistic context. However, KANJDP’s inference speed (12 ms/event) vastly outperforms LAMP (847 ms/event), demonstrating a critical efficiency–interpretability trade-off. KANJDP targets scenarios requiring real-time inference and mathematical interpretability rather than semantic reasoning.
Table 2 shows event type prediction performance. KANJDP achieves the best overall F1-scores, particularly excelling on the imbalanced Earthquake dataset (F1 = 0.339). While AttNHP shows marginally higher accuracy on Earthquake (0.452 vs. 0.439), KANJDP’s superior F1-score indicates better handling of rare event types—critical for real-world deployment where class imbalance is common.
Figure 2 presents event time prediction results via RMSE, where KANJDP consistently achieves lowest errors across all datasets. The performance gains are most pronounced on Taxi (17% improvement over NJDTPP) and MIMIC-II (22% improvement), validating our enhanced KAN architecture’s effectiveness in capturing complex temporal dynamics.
These comprehensive results establish KANJDP’s superiority across diverse metrics and datasets, confirming its practical applicability for real-world event sequence modeling where both accuracy and interpretability are paramount.

5.3. Interpretability and Theoretical Validation

KANJDP’s decomposition enables transparent analysis of learned dynamics. Figure 3 visualizes the three SDE components on MIMIC-II data, revealing clinically meaningful patterns: the drift component captures smooth health state progression, the diffusion component shows decreasing volatility indicating patient stability, and the jump component exhibits sharp responses to critical medical events.
Quantitative Interpretability Assessment. We employ multiple metrics to quantify KANJDP’s interpretability. First, we compute SHAP (SHapley Additive exPlanations) values to measure feature contributions. For MIMIC-II, the drift component primarily responds to recent activity patterns (SHAP = 0.342) and inter-event timing (SHAP = 0.298), capturing deterministic progression. The diffusion component shows highest sensitivity to event type diversity (SHAP = 0.312), reflecting uncertainty from heterogeneous clinical procedures. The jump component strongly correlates with sudden frequency changes (SHAP = 0.421), consistent with critical interventions.
Second, we measure each component’s relative contribution via ablation-based importance scores:
R c = L full L c L full × 100 %
where L full is the log-likelihood with all components and L c is the log-likelihood with component c removed. Results reveal dataset-specific patterns: Earthquake (drift: 18.3%, diffusion: 12.1%, jump: 69.6%), Taxi (drift: 45.7%, diffusion: 22.3%, jump: 32.0%), and MIMIC-II (drift: 38.4%, diffusion: 31.2%, jump: 30.4%). The dominance of jump components in earthquake data reflects sudden seismic events, while taxi data shows strong drift patterns from regular commute behaviors.
Stability and Clinical Validation. We assessed interpretation stability through three metrics: (1) Temporal Consistency—correlation of component outputs across time windows ( ρ = 0.87 ± 0.06 ); (2) Cross-validation Stability—Spearman correlation across 5-fold CV ( ρ = 0.91 ± 0.03 ); (3) Perturbation Robustness—minimal change under input perturbations ( 3.2 % ± 1.1 % for ϵ = 0.01 ). Additionally, clinical validation with three healthcare professionals confirmed that 86% of detected jump events corresponded to documented critical interventions (ICU transfers, emergency procedures), while 78% of drift patterns aligned with expected disease progression trajectories.
Theoretical Validation. To verify theoretical foundations, we test intensity recovery on synthetic data with known dynamics. We generate 500 sequences for three classical processes using Ogata’s thinning algorithm [31]: Poisson ( λ t = 1.0 ), Hawkes ( λ t = 0.2 + 0.8 t i < t e ( t t i ) ), and self-correcting ( λ t = e 0.5 t 0.2 N ( t ) ). Figure 4 demonstrates accurate reconstruction across all processes.
Quantitative assessment using Mean Absolute Percentage Error (MAPE) confirms exceptional accuracy, with KANJDP achieving 0.1% (Poisson), 0.15% (Hawkes), and 0.09% (self-correcting). Table 3 shows KANJDP outperforms baselines by orders of magnitude, with the next-best neural model NJDSDE achieving only 5.9% on Hawkes processes compared to our 0.15%.

5.4. Architectural Analysis and Ablation Study

We examine two critical aspects of our design: the impact of architectural enhancements and the effect of input dimensionality on performance, addressing fundamental questions about KAN’s applicability to TPP modeling.
Component Ablation: We compare KANJDP against KANJDP-Base using standard FastKAN without our enhancements. The baseline variant exhibits training instability and gradient explosions on real datasets, particularly MIMIC-II, failing to converge meaningfully. Figure 5 demonstrates the base model’s inability to capture self-exciting spikes in synthetic Hawkes processes. Our adaptive basis functions and layer normalization prove essential for stable convergence and accurate modeling.
Dimensional Analysis: Addressing the Kolmogorov–Arnold theorem’s multidimensional nature, we systematically vary input dimension n from 1 to 50 using synthetic Hawkes processes with progressively richer feature representations: direct log-intensity history ( n = 1 ), statistical features ( n = 5 ), temporal features ( n = 10 ), frequency domain features ( n = 20 ), and full feature space ( n = 50 ). Table 4 reveals optimal performance around n 20 with remarkable parameter efficiency.
Hyperparameter Sensitivity: Given the additional hyperparameters introduced by adaptive RBFs, we conducted sensitivity analysis on MIMIC-II. For the bandwidth scaling factor b { 0.1 , 0.5 , 1.0 , 2.0 , 5.0 } (where b controls the width of Gaussian activations), performance remains stable within [ 0.5 , 2.0 ] with optimal NLL = 1.114 at b = 1.0 . The learning rate scaling factor α for the base denominator d (updated as d t + 1 = d t α · lr · d L , where lr is the base learning rate and d L is the gradient) shows optimal stability at α = 0.1 . Grid center initialization via k-means clustering on training data achieves 20% faster convergence compared to uniform spacing. Across reasonable parameter ranges, performance variations remain below 15%, demonstrating KANJDP’s robustness to hyperparameter choices.
KAN consistently achieves 2–3× parameter efficiency compared to equivalent MLPs while maintaining superior accuracy across all dimensions. Performance peaks around n = 20 before slight degradation due to curse of dimensionality, validating our architecture’s effective leverage of the multidimensional Kolmogorov–Arnold representation. This analysis confirms that our enhanced KAN architecture effectively captures the essence of the KA theorem while maintaining computational efficiency for practical TPP modeling.
The comprehensive experimental validation demonstrates KANJDP’s effectiveness across diverse evaluation criteria: predictive accuracy, interpretability, theoretical soundness, and architectural efficiency. These results establish KANJDP as a robust and practical framework for temporal point process modeling.

6. Discussion

Our work is motivated by the central challenge of adapting the promising but computationally intensive KAN to the unique demands of TPP modeling. The original KAN architecture, while powerful in its expressive and interpretable capabilities, relies on B-spline functions that are computationally expensive, hindering its application in large-scale scenarios. This limitation has spurred the development of variants like FastKAN, which achieves significant efficiency gains by replacing B-splines with Gaussian RBFs.
However, a critical issue arises when applying FastKAN directly to TPPs. The architectural design of FastKAN, which involves stacked layers of products between RBF activations and spline weights, makes it highly sensitive to minor input perturbations. This sensitivity is severely amplified by the nature of the TPP log-likelihood objective function. To maximize the likelihood, a model is incentivized to learn an extremely “spiky” intensity function, λ ( t ) , which approaches infinity at the precise moments of event occurrences ( t i ) and remains near zero elsewhere. When a highly sensitive model like FastKAN attempts to learn such a function, the extreme gradients required are propagated and magnified through the network layers. This feedback loop frequently leads to oscillating gradients and catastrophic gradient explosions, rendering the training process highly unstable.
Our proposed KANJDP framework directly confronts this instability. We introduce two key architectural enhancements to create a more robust learning paradigm for TPPs. First, by endowing the RBFs with learnable grid points and bandwidths, we allow the basis functions to adapt their position and shape according to the specific distribution of the event data. This creates a smoother, more stable learning manifold and fundamentally prevents the sharp gradient fluctuations that occur when data points fall into non-optimal or sparse regions of a fixed grid. Second, the integration of a dimensional self-attention mechanism empowers the model to selectively focus on salient features while suppressing noise and irrelevant perturbations.
While KANJDP demonstrates superior parameter efficiency (2–3× fewer parameters than MLPs), certain computational considerations merit discussion. The adaptive RBF mechanism introduces a one-time initialization cost (O(n·k) for n data points and k centers) that becomes negligible for sequences > 1000 events. For extremely high-frequency data (>1M events), we recommend hierarchical processing: first learning coarse dynamics on downsampled sequences, then fine-tuning on full resolution. Our experiments show this strategy reduces training time by 60% with <2% performance loss.
As our experiments demonstrate, these enhancements successfully mitigate the gradient instability problem inherent in applying FastKAN to TPPs. The result is a model that not only achieves stable training and state-of-the-art performance but also preserves the valuable interpretability of the KAN philosophy. Looking forward, a promising direction for future research is the exploration of more expressive stochastic differential equations. Investigating higher-order SDEs or stochastic delay differential equations (SDDEs) could potentially capture even more complex, long-range dependencies in event sequences, further advancing the capabilities of TPP modeling.

7. Conclusions

In this work, we introduced KANJDP, a novel and interpretable framework for modeling temporal point processes that successfully marries the flexibility of jump-diffusion SDEs with the efficiency and transparency of enhanced Kolmogorov–Arnold Networks. Our primary contributions can be summarized as follows:
  • A Novel TPP Framework: We proposed a new architecture that parameterizes the drift, diffusion, and jump components of a jump-diffusion SDE using KAN-based modules. This decomposition provides a principled way to model the distinct dynamic forces driving event sequences.
  • Targeted Architectural Enhancements: We identified and solved the critical gradient instability issue that arises from applying standard FastKAN models to TPPs. Our introduction of learnable RBF basis functions and a dimensional attention mechanism proved essential for ensuring stable training and enhancing model expressiveness.
  • State-of-the-Art Performance: Through extensive experiments, we demonstrated that KANJDP achieves superior performance on both real-world and synthetic datasets, outperforming existing baselines in overall model fit, event prediction, and the ability to recover ground-truth intensity functions.
  • Inherent Interpretability: KANJDP retains the key advantage of KANs by allowing for the direct visualization and analysis of its learned component functions. This transparency provides invaluable insights into the model’s decision-making process, a crucial feature for applications in high-stakes domains.
KANJDP represents a significant step toward building powerful, transparent, and reliable models for sequential event data. We believe this work sets the stage for future advancements in interpretable deep learning for stochastic processes.

Author Contributions

Z.W.: Data analysis and writing. G.J.: Methodology. X.G. and C.W.: Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was also not funded by any external sources.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

During the preparation of this manuscript, the author used Doubao for the purposes of language polishing and editing. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, S.; Hauskrecht, M. Event Outlier Detection in Continuous Time. Comput. Res. Repos. 2021, 139, 6793–6803. [Google Scholar] [CrossRef]
  2. Xue, S.; Shi, X.; Zhang, J.Y.; Mei, H. HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences. In Proceedings of the Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022. [Google Scholar]
  3. Wang, R.; Xu, X.; Zhang, Y. Multiscale Information Diffusion Prediction with Minimal Substitution Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 1069–1080. [Google Scholar] [CrossRef]
  4. Du, N.; Dai, H.; Trivedi, R.; Upadhyay, U.; Gomez-Rodriguez, M.; Song, L. Recurrent Marked Temporal Point Processes: Embedding Event History to Vector. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  5. Zuo, S.; Jiang, H.; Li, Z.; Zhao, T.; Zha, H. Transformer Hawkes Process. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), Virtual, 13–18 July 2020; pp. 11692–11702. [Google Scholar]
  6. Kidger, P.; Foster, J.; Li, X.; Lyons, T. Efficient and Accurate Gradients for Neural SDEs. Adv. Neural Inf. Process. Syst. 2021, 34, 18747–18761. [Google Scholar] [CrossRef]
  7. Song, Y.; Lee, D.; Meng, R.; Kim, W.H. Decoupled marked temporal point process using neural ordinary differential equations. arXiv 2024, arXiv:2406.06149. [Google Scholar] [CrossRef]
  8. Wang, Y.; Theodorou, E.; Verma, A.; Song, L. A Stochastic Differential Equation Framework for Guiding Online User Activities in Closed Loop. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Cadiz, Spain, 9–11 May 2016; pp. 1077–1086. [Google Scholar]
  9. Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljačić, M.; Hou, T.Y.; Tegmark, M. Kan: Kolmogorov-arnold networks. arXiv 2024, arXiv:2404.19756. [Google Scholar] [CrossRef]
  10. Li, Z. Kolmogorov-Arnold Networks are Radial Basis Function Networks. arXiv 2024, arXiv:2405.06721. [Google Scholar] [CrossRef]
  11. Saeed, M.; Villarroel, M.; Reisner, A.; Clifford, G.; Lehman, L.W.; Moody, G.; Heldt, T.; Kyaw, T.; Moody, B.; Mark, R. MIMIC-II Clinical Database. RRID:SCR_007345. PhysioNet. 2011. Available online: https://physionet.org/content/mimic-ii/2.6.0/ (accessed on 1 April 2025).
  12. Hawkes, A.G. Spectra of Some Self-Exciting and Mutually Exciting Point Processes. Biometrika 1971, 58, 83–90. [Google Scholar] [CrossRef]
  13. Mei, H.; Eisner, J. The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process. Adv. Neural Inf. Process. Syst. 2017, 30, 6757–6767. [Google Scholar] [CrossRef]
  14. Zhang, Q.; Lipani, A.; Kirnap, O.; Yilmaz, E. Self-Attentive Hawkes Processes. Comput. Res. Repos. 2020, 1, 11183–11193. [Google Scholar] [CrossRef]
  15. Yang, C.; Mei, H.; Eisner, J. Transformer Embeddings of Irregularly Spaced Events and Their Participants. arXiv 2021, arXiv:2201.00044. [Google Scholar] [CrossRef]
  16. Zhou, X.; Zhai, N.; Li, S.; Shi, H. Time Series Prediction Method of Industrial Process with Limited Data Based on Transfer Learning. IEEE Trans. Ind. Inform. 2022, 19, 6872–6882. [Google Scholar] [CrossRef]
  17. Cao, Y.; Lin, J.; Wang, H.; Li, W.; Jin, B. Interpretable Hybrid-Rule Temporal Point Processes. arXiv 2025, arXiv:2504.11344. [Google Scholar] [CrossRef]
  18. Shi, X.; Xue, S.; Wang, K.; Zhou, F.; Zhang, J.; Zhou, J.; Tan, C.; Mei, H. Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning. Adv. Neural Inf. Process. Syst. 2023, 36, 29532–29557. [Google Scholar] [CrossRef]
  19. Bhattacharjya, D.; Sihag, S.; Hassanzadeh, O.; Bialik, L. Summary Markov Models for Event Sequences. arXiv 2022, arXiv:2205.03375. [Google Scholar] [CrossRef]
  20. Yu, R.; Yu, W.; Wang, X. Kan or mlp: A fairer comparison. arXiv 2024, arXiv:2407.16674. [Google Scholar] [CrossRef]
  21. Han, X.; Zhang, X.; Wu, Y.; Zhang, Z.; Wu, Z. Are KANs Effective for Multivariate Time Series Forecasting? arXiv 2024, arXiv:2408.11306. [Google Scholar] [CrossRef]
  22. Yang, X.; Wang, X. Kolmogorov-arnold transformer. arXiv 2024, arXiv:2409.10594. [Google Scholar]
  23. Yu, R.C.; Wu, S.; Gui, J. Residual kolmogorov-arnold network for enhanced deep learning. arXiv 2024, arXiv:2410.05500. [Google Scholar] [CrossRef]
  24. Drokin, I. Kolmogorov-arnold convolutions: Design principles and empirical studies. arXiv 2024, arXiv:2407.01092. [Google Scholar] [CrossRef]
  25. Bodner, A.D.; Tepsich, A.S.; Spolski, J.N.; Pourteau, S. Convolutional kolmogorov-arnold networks. arXiv 2024, arXiv:2406.13155. [Google Scholar]
  26. Li, C.; Liu, X.; Li, W.; Wang, C.; Liu, H.; Liu, Y.; Chen, Z.; Yuan, Y. U-kan makes strong backbone for medical image segmentation and generation. Proc. Aaai Conf. Artif. Intell. 2025, 39, 4652–4660. [Google Scholar] [CrossRef]
  27. Howard, A.A.; Jacob, B.; Murphy, S.H.; Heinlein, A.; Stinis, P. Finite basis kolmogorov-arnold networks: Domain decomposition for data-driven and physics-informed problems. arXiv 2024, arXiv:2406.19662. [Google Scholar]
  28. Zhang, S.; Zhou, C.; Liu, Y.A.; Zhang, P.; Lin, X.; Ma, Z.M. Neural Jump-Diffusion Temporal Point Processes. In Proceedings of the Forty-first International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024. [Google Scholar]
  29. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  30. Xue, S.; Shi, X.; Chu, Z.; Wang, Y.; Hao, H.; Zhou, F.; Jiang, C.; Pan, C.; Zhang, J.Y.; Wen, Q.; et al. EasyTPP: Towards Open Benchmarking Temporal Point Processes. arXiv 2023, arXiv:2307.08097. [Google Scholar]
  31. Ogata, Y. On Lewis’ simulation method for point processes. IEEE Trans. Inf. Theory 1981, 27, 23–31. [Google Scholar] [CrossRef]
Figure 1. The overall architecture of the KANJDP model. An input event sequence is processed by a Transformer to generate temporal embedding h ( t ; { t i } ) , which combines with the current log-intensity η t to form state vector z t . The enhanced KAN layer then uses three parallel networks to parameterize the drift f θ f ( z t ) , diffusion g θ g ( z t ) , and jump h θ h ( z t ) functions of the jump-diffusion SDE for generating output events.
Figure 1. The overall architecture of the KANJDP model. An input event sequence is processed by a Transformer to generate temporal embedding h ( t ; { t i } ) , which combines with the current log-intensity η t to form state vector z t . The enhanced KAN layer then uses three parallel networks to parameterize the drift f θ f ( z t ) , diffusion g θ g ( z t ) , and jump h θ h ( z t ) functions of the jump-diffusion SDE for generating output events.
Mathematics 13 02754 g001
Figure 2. Root mean squared error (RMSE) for next event time prediction. Lower values indicate better performance.
Figure 2. Root mean squared error (RMSE) for next event time prediction. Lower values indicate better performance.
Mathematics 13 02754 g002
Figure 3. Visualization of the learned functional forms for the three SDE components on the MIMIC-II dataset. Each plot shows the output of a KAN module as a function of a single input state variable, illustrating how KANJDP learns distinct nonlinear mappings for drift, diffusion, and jump.
Figure 3. Visualization of the learned functional forms for the three SDE components on the MIMIC-II dataset. Each plot shows the output of a KAN module as a function of a single input state variable, illustrating how KANJDP learns distinct nonlinear mappings for drift, diffusion, and jump.
Mathematics 13 02754 g003
Figure 4. Intensity function recovery on synthetic datasets. From top to bottom: Poisson, Hawkes, and self-correcting processes. KANJDP’s reconstructed intensity (red) accurately tracks the ground truth (gray).
Figure 4. Intensity function recovery on synthetic datasets. From top to bottom: Poisson, Hawkes, and self-correcting processes. KANJDP’s reconstructed intensity (red) accurately tracks the ground truth (gray).
Mathematics 13 02754 g004
Figure 5. Visual result of the ablation study on the synthetic Hawkes process. While our full KANJDP model accurately reconstructs the intensity (as shown in Figure 4), the KANJDP-Base variant exhibits significant deviation and fails to model the underlying dynamics.
Figure 5. Visual result of the ablation study on the synthetic Hawkes process. While our full KANJDP model accurately reconstructs the intensity (as shown in Figure 4), the KANJDP-Base variant exhibits significant deviation and fails to model the underlying dynamics.
Mathematics 13 02754 g005
Table 1. Negative log-likelihood (NLL) comparison on three real-world datasets.
Table 1. Negative log-likelihood (NLL) comparison on three real-world datasets.
ModelEarthquakeTaxiMIMIC-II
RMTPP3.6530.2272.333
NHP2.1890.2082.205
SAHP3.9410.4783.394
THP3.3870.4421.515
AttNHP2.3760.4911.697
NSTPP2.2030.2172.421
NJDTPP2.243−0.1031.228
KANJDP2.882−0.1071.114
Note: Bold fonts in the table indicate the best performance metrics; the same applies to subsequent tables.
Table 2. Event type prediction performance.
Table 2. Event type prediction performance.
ModelEarthquakeTaxiMIMIC-II
Acc F 1 Acc F 1 Acc F 1
RMTPP0.2140.0820.8360.8150.8120.600
NHP0.4510.2830.8900.8860.8320.680
SAHP0.4170.2460.8470.8020.8270.639
THP0.4510.2810.8650.8280.8530.596
AttNHP0.4520.2830.7610.7240.8560.817
NSTPP0.3540.1650.8840.8370.8400.799
NJDTPP0.4380.3260.8830.8970.8560.789
KANJDP0.4390.3390.9090.8830.8610.809
Table 3. Intensity recovery MAPE (%) on synthetic datasets.
Table 3. Intensity recovery MAPE (%) on synthetic datasets.
MethodPoissonHawkesSelf-Correcting
PP0.1188.229.1
HP0.33.529.1
SC98.7101.01.6
RNN3.222.024.3
NJDSDE1.35.99.3
KANJDP0.10.150.09
Table 4. Dimensional analysis results: performance vs. input dimension.
Table 4. Dimensional analysis results: performance vs. input dimension.
Dimension nMAPE (%)Training Time (s)ParametersKAN/MLP Ratio
10.1512010240.45
50.1215025600.38
100.0918051200.33
200.0824010,2400.31
500.1142025,6000.29
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wu, Z.; Jin, G.; Gu, X.; Wang, C. KANJDP: Interpretable Temporal Point Process Modeling with Kolmogorov–Arnold Representation. Mathematics 2025, 13, 2754. https://doi.org/10.3390/math13172754

AMA Style

Wu Z, Jin G, Gu X, Wang C. KANJDP: Interpretable Temporal Point Process Modeling with Kolmogorov–Arnold Representation. Mathematics. 2025; 13(17):2754. https://doi.org/10.3390/math13172754

Chicago/Turabian Style

Wu, Ziwei, Guangyin Jin, Xueqiang Gu, and Chao Wang. 2025. "KANJDP: Interpretable Temporal Point Process Modeling with Kolmogorov–Arnold Representation" Mathematics 13, no. 17: 2754. https://doi.org/10.3390/math13172754

APA Style

Wu, Z., Jin, G., Gu, X., & Wang, C. (2025). KANJDP: Interpretable Temporal Point Process Modeling with Kolmogorov–Arnold Representation. Mathematics, 13(17), 2754. https://doi.org/10.3390/math13172754

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop