Proceeding Paper

Kolmogorov–Arnold Networks for System Identification of First- and Second-Order Dynamic Systems †

1 Control Systems Department, Faculty of Electronics and Automation, Technical University Sofia, Branch Plovdiv, 4000 Plovdiv, Bulgaria
2 Center of Competence “Smart Mechatronic, Eco- and Energy-Saving Systems and Technologies”, 5300 Gabrovo, Bulgaria
* Author to whom correspondence should be addressed.
Presented at the 14th International Scientific Conference TechSys 2025—Engineering, Technology and Systems, Plovdiv, Bulgaria, 15–17 May 2025.
Eng. Proc. 2025, 100(1), 100059; https://doi.org/10.3390/engproc2025100059
Published: 30 July 2025

Abstract

System identification, originating in the 1950s from statistical theory, has since developed a wealth of algorithms, insights, and practical expertise. We introduce Kolmogorov–Arnold neural networks (KANs) as an interpretable alternative for model discovery. Leveraging KANs’ inherent ability to approximate data and interpret it through learnable activation functions and the decomposition of multivariate mappings into univariate transforms, we test their ability to recover the step responses of first- and second-order systems both numerically and symbolically. We employ synthetic datasets, both noise-free and with Gaussian noise, and find that KANs can achieve very low RMSE and parameter error with simple architectures. Our results demonstrate that KANs combine ease of implementation with symbolic transparency, positioning them as a compelling bridge between classical identification and modern machine learning.

1. Introduction

Understanding and modeling dynamic systems is a fundamental part of control engineering, data processing, and science-centered problems as a whole. Correctly identifying the underlying systems from observational data, known as system identification, is essential. Accurate methods for system identification directly help engineers develop, simulate, control, and optimize real-world systems. The field has its origins in the 1950s [1], grounded in statistical theory. It has grown dramatically over the decades, and a strong theoretical foundation of algorithms and techniques has been developed. The fundamental aspects of the traditional approach to system identification, focusing on the core components that define the field, include the following: (1) the system’s observed input–output data, (2) a set of parameterized candidate models, known as the model structure, (3) an estimation technique for fitting model parameters to the data, and (4) a validation process to guide the selection of an appropriate model. Among these, choosing the right model structure is especially critical. Recursive models, such as the ARX model, a linear difference equation relating inputs and outputs, are commonly used in linear system identification. They serve as universal approximators for linear systems, capable of achieving high accuracy given sufficiently high model orders. However, selecting appropriate structural parameters is essential to ensure a good fit [2].
Classical techniques like the Prediction Error Method, Least Squares, and Subspace Identification Methods provide reliable ways for parameter estimation of linear systems [3]. However, they can prove to be insufficiently accurate when dealing with highly nonlinear dynamics, systems with mostly unknown internal structures, or heavily noisy data [2,4]. As a result, neural networks are gaining popularity as an alternative due to their ability to approximate complex nonlinear mappings with arbitrary precision [4,5]. Architectures such as feedforward neural networks, recurrent neural networks, and physics-informed neural networks have been applied to a wide range of system identification problems with promising results [4,5]. These models, however, often suffer from a lack of interpretability, sensitivity to hyperparameter tuning, and challenges with generalization.
Kolmogorov–Arnold Networks (KANs), inspired by the Kolmogorov–Arnold representation theorem, offer a novel framework that decomposes multivariate functions into compositions of univariate nonlinear transformations and additive operations. Unlike conventional deep neural networks that rely on learned weights and fixed activation functions, KANs can learn both the structure and the nonlinearities of the system, enabling more interpretable and potentially more accurate approximations with fewer parameters [6].
There has been a rapid increase in the literature on system identification with neural networks in the last few years, with notable contributions from the deep learning community. For instance, non-asymptotic learning theory has been applied to analyze the performance of neural estimators in system identification [7]. Physics-informed models have also been shown to incorporate prior knowledge effectively, and as a result, they reduce the amount of data needed and improve the overall fit of models [8,9]. However, interpretability remains a challenge in the field. Most neural networks operate as black boxes, and extracting the internal mechanisms of deep learning models remains difficult. KANs directly address this issue by providing readable, easy-to-understand models thanks to their structural features.
This study explores the potential of KANs for system identification. We create synthetic datasets of both clean and noisy time responses of first- and second-order systems, approximate them by training KAN models, and then interpret the underlying equations that describe the time response of the systems by making use of the architecture’s inherent capabilities for symbolic representation of the activation functions. We then compare the modeled responses and equations to the true ones to gauge their potential for future use in real-world scenarios. The results are promising and point to the conclusion that KANs might prove to be a solid alternative: they are easy to implement, reach sufficient levels of accuracy using only a relatively small number of parameters, and provide an opportunity for easy and intuitive interpretation of unknown systems.

2. Methods

2.1. Kolmogorov–Arnold Networks as an Approximation Tool

In comparison to MLPs, which are based on the universal approximation theorem [10], KANs [6] are based on the Kolmogorov–Arnold representation theorem [11], which states that any continuous and smooth multivariate function $f(\mathbf{x}): [0,1]^n \to \mathbb{R}$ can be represented by a finite sum of continuous univariate functions, as follows:
$$f(\mathbf{x}) = f(x_1, \ldots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right). \quad (1)$$
Here, $\phi_{q,p}: [0,1] \to \mathbb{R}$ and $\Phi_q: \mathbb{R} \to \mathbb{R}$; each $\phi_{q,p}$ is a univariate function, and the outer functions $\Phi_q$ combine the inner univariate components to reconstruct $f(\mathbf{x})$. Equation (1) exactly describes a two-layer KAN with a hidden-layer width of $2n+1$ and an output dimension of 1. The Kolmogorov–Arnold representation is extended to a general KAN structure of arbitrary depth and width by exploiting the hierarchical structure of Equation (1) and stacking individual layers [6]:
$$\mathbf{x}_{l+1} = \Phi_l(\mathbf{x}_l) = \begin{pmatrix} \phi_{l,1,1}(\cdot) & \phi_{l,1,2}(\cdot) & \cdots & \phi_{l,1,n_l}(\cdot) \\ \phi_{l,2,1}(\cdot) & \phi_{l,2,2}(\cdot) & \cdots & \phi_{l,2,n_l}(\cdot) \\ \vdots & \vdots & & \vdots \\ \phi_{l,n_{l+1},1}(\cdot) & \phi_{l,n_{l+1},2}(\cdot) & \cdots & \phi_{l,n_{l+1},n_l}(\cdot) \end{pmatrix} \mathbf{x}_l \quad (2)$$
$\Phi_l$ is the function matrix corresponding to the $l$-th KAN layer. A general KAN network can be composed of $L$ layers: given an input vector $\mathbf{x}_0 \in \mathbb{R}^{n_0}$, the output of the KAN is as follows:
$$\mathrm{KAN}(\mathbf{x}) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_1 \circ \Phi_0)(\mathbf{x}) \quad (3)$$
A single layer with $n_{\mathrm{in}}$-dimensional inputs and $n_{\mathrm{out}}$-dimensional outputs can be described as a matrix of 1D functions as follows:
$$\Phi = \{\phi_{q,p}\}, \quad p = 1, 2, \ldots, n_{\mathrm{in}}, \quad q = 1, 2, \ldots, n_{\mathrm{out}} \quad (4)$$
Each $\phi_{q,p}$ has trainable parameters. In the original Kolmogorov–Arnold theorem, the inner functions form a layer with $n_{\mathrm{in}} = n$ and $n_{\mathrm{out}} = 2n+1$, while the outer functions form another layer with $n_{\mathrm{in}} = 2n+1$ and $n_{\mathrm{out}} = 1$. Because these operations are all differentiable, KANs can be trained through standard backpropagation [12,13].
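To make the composition in Equations (2) and (3) concrete, the following toy sketch (our illustration, not taken from the pykan implementation) evaluates a two-layer KAN-like network in which every edge carries a fixed, hand-picked univariate function instead of a trainable spline; only the sum-over-edges structure is the point here.

import numpy as np

# Toy KAN forward pass: layer l is a matrix of univariate edge functions
# phi[q][p]; node q of the next layer sums phi[q][p](x_p) over all inputs p,
# exactly as in Equation (2).
def kan_layer(x, phi_matrix):
    # x: vector of length n_in; phi_matrix: n_out rows, each a list of n_in callables
    return np.array([sum(phi(x[p]) for p, phi in enumerate(row)) for row in phi_matrix])

# Two stacked layers (Equation (3)) with fixed, hand-picked univariate functions.
layer0 = [[np.tanh], [np.sin], [lambda t: t ** 2]]   # 1 input  -> 3 hidden nodes
layer1 = [[np.cos, np.exp, lambda t: 0.5 * t]]       # 3 hidden -> 1 output

x0 = np.array([0.3])
x1 = kan_layer(x0, layer0)
y = kan_layer(x1, layer1)
print(y)   # scalar output of the toy two-layer network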
The activation function $\phi(x)$ on each edge of a layer is defined as follows:
$$\phi(x) = w_b\, b(x) + w_s\, \mathrm{spline}(x), \quad (5)$$
where $b(x)$ is a basis function and $\mathrm{spline}(x)$ is a spline function of the following form:
$$b(x) = \mathrm{silu}(x) = \frac{x}{1 + e^{-x}}, \quad (6)$$
$$\mathrm{spline}(x) = \sum_i c_i B_i(x). \quad (7)$$
The weights $w_b$, $w_s$ and the coefficients $c_i$ are all learned during training, while the grid resolution and the polynomial degree of the spline basis $B_i(x)$ are treated as user-tuned hyperparameters.
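As a minimal numerical sketch of Equations (5)–(7), the snippet below builds one edge activation from a SiLU basis term and a cubic B-spline constructed with SciPy; the weights w_b, w_s and the coefficients c_i are the quantities a KAN would learn, and the particular values, knot grid, and degree chosen here are our own illustrative assumptions.

import numpy as np
from scipy.interpolate import BSpline

def silu(x):
    # b(x) = x / (1 + exp(-x)), Equation (6)
    return x / (1.0 + np.exp(-x))

# Cubic B-spline basis (k = 3) on a uniform knot grid; len(knots) = len(coef) + k + 1.
k = 3
knots = np.linspace(-1.5, 1.5, 12)
coef = np.random.randn(len(knots) - k - 1)            # c_i, learned in a real KAN
spline = BSpline(knots, coef, k, extrapolate=True)    # spline(x) = sum_i c_i B_i(x), Equation (7)

w_b, w_s = 1.0, 0.5                                   # w_b, w_s, learned in a real KAN

def phi(x):
    # Edge activation, Equation (5): phi(x) = w_b * b(x) + w_s * spline(x)
    return w_b * silu(x) + w_s * spline(x)

print(phi(np.linspace(-1.0, 1.0, 5)))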
The second revision of the architecture introduces multiplication operations to help with the approximation of multiplicative structures. This version is named MultKAN, and the difference lies in the transformation from one layer’s subnodes to the next layer’s nodes. In the new version, addition nodes are directly copied from corresponding subnodes, and multiplication nodes realize multiplication on k subnodes from the previous layer [13].
A MultKAN layer is composed of a standard KAN layer, denoted $\Phi$, and a multiplication layer, $\mathrm{M}$. The KAN layer $\Phi_l$ takes an input vector $\mathbf{x}_l \in \mathbb{R}^{n_l}$ and produces an output $\mathbf{z}_l = \Phi_l(\mathbf{x}_l) \in \mathbb{R}^{n^{a}_{l+1} + 2 n^{m}_{l+1}}$, where $n^{a}_{l+1}$ and $n^{m}_{l+1}$ denote the numbers of addition and multiplication nodes in layer $l+1$. The multiplication layer has two components: one that performs pairwise multiplications between subnodes and another that applies the identity transformation. The entire MultKAN layer can be concisely expressed as $\Psi \equiv \mathrm{M} \circ \Phi$. Therefore, the full MultKAN network is structured as follows:
$$\mathrm{MultKAN}(\mathbf{x}) = (\Psi_{L-1} \circ \Psi_{L-2} \circ \cdots \circ \Psi_1 \circ \Psi_0)(\mathbf{x}) \quad (8)$$
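The role of the multiplication nodes can be illustrated with the small sketch below (our toy code, not pykan internals): the subnode vector produced by the KAN layer Φ is split so that addition nodes copy single subnodes, while each multiplication node takes the product of k = 2 consecutive subnodes.

import numpy as np

def mult_layer(subnodes, n_add, n_mult, k=2):
    # subnodes: output z of the KAN layer Phi, length n_add + k * n_mult.
    # Addition nodes copy single subnodes; each multiplication node multiplies
    # k consecutive subnodes (k = 2 gives pairwise multiplication, as in MultKAN).
    add_part = subnodes[:n_add]
    mult_part = [np.prod(subnodes[n_add + i * k : n_add + (i + 1) * k]) for i in range(n_mult)]
    return np.concatenate([add_part, np.array(mult_part)])

z = np.array([0.2, 1.5, -0.4, 3.0])        # e.g. 0 addition subnodes and 2 pairs
print(mult_layer(z, n_add=0, n_mult=2))    # -> approximately [0.3, -1.2]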

2.2. Kolmogorov–Arnold Networks for System Identification

In this paper, we investigate the use of Kolmogorov–Arnold Networks for the purposes of system identification by looking at two cases—first- and second-order dynamic systems—and finding the parameters and underlying functions that describe their input-output relationships.

2.2.1. Datasets

We generate all datasets synthetically using the open-source pykan library. The datasets represent the time domain response of two canonical dynamic systems to a unit step input:
  • First-order system: modeled by the transfer function $G_1(s) = \frac{K}{\tau s + 1}$, with a time-domain step response of the form $K\left(1 - e^{-t/\tau}\right)$. We test the following parameter values: $K = 2$; $\tau = 10, 20, 40, 50$.
  • Second-order underdamped system: modeled by the transfer function $G_2(s) = \frac{K \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$, with a time-domain step response of the form $x(t) = 1 - e^{-\zeta\omega_n t}\left(\cos(\omega_d t) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t)\right)$, where $\omega_d = \omega_n\sqrt{1-\zeta^2}$ is the damped natural frequency. We test the following values: $\zeta = 0.5, 0.25, 0.1$; $\omega_n = 1$.
For the first-order system, we simulate two variants: a noise-free response and a noisy response, in which zero-mean Gaussian white noise is added to the output signal [14]. For the second-order system, we explore only the noise-free variant in this study.
Each dataset consists of 10,000 discrete time samples, recorded at a fixed sampling interval of 0.01 s. We split each dataset into 8000 training samples and 2000 testing samples.
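A sketch of how such datasets can be generated and packed for training is shown below; the noise standard deviation, the variable names, and the train/test dictionary layout (following the convention used in the pykan examples) are our assumptions, not the paper's actual scripts.

import numpy as np
import torch

# Unit-step responses sampled at dt = 0.01 s, 10,000 points per dataset.
dt, n = 0.01, 10_000
t = np.arange(n) * dt

def first_order(t, K=2.0, tau=50.0):
    # K * (1 - exp(-t / tau)), step response of G1(s)
    return K * (1.0 - np.exp(-t / tau))

def second_order(t, wn=1.0, zeta=0.5):
    # Underdamped step response of G2(s); wd is the damped natural frequency.
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + zeta / np.sqrt(1.0 - zeta ** 2) * np.sin(wd * t))

y = first_order(t)
y_noisy = y + np.random.normal(0.0, 0.01, size=y.shape)   # zero-mean Gaussian noise; std 0.01 is an assumption

# 8000 / 2000 split packed as torch tensors (dictionary layout used in the pykan examples).
idx = np.random.permutation(n)
tr, te = idx[:8000], idx[8000:]
dataset = {
    'train_input': torch.tensor(t[tr, None], dtype=torch.float32),
    'train_label': torch.tensor(y_noisy[tr, None], dtype=torch.float32),
    'test_input':  torch.tensor(t[te, None], dtype=torch.float32),
    'test_label':  torch.tensor(y_noisy[te, None], dtype=torch.float32),
}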

2.2.2. Network Architecture

For the first-order systems, we use the standard KAN implementation with only addition on the nodes. The starting architecture has the shape [1,5,1], and we train it with regularization and grid extension to reach the final form of [1,1,1] after pruning. The starting size was chosen after initial testing showed that wider and deeper networks were inefficient and led to unsatisfactory loss values.
For the second-order systems, we employ the MultKAN with starting architecture [1,[0,6],1], and we again train with regularization and grid extension to reach a final form of [1,[0,2],1] after pruning. A comparison of initial and final architecture can be seen in Figure 1.
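With the pykan library, the two starting architectures can be instantiated roughly as in the sketch below; the keyword names follow the published pykan examples and may differ between library versions, so treat this as an assumption rather than the paper's exact code.

from kan import KAN

# First-order systems: additive nodes only, starting shape [1, 5, 1],
# cubic splines (k = 3) on an initial grid of 3 intervals.
kan_first = KAN(width=[1, 5, 1], grid=3, k=3, seed=0)

# Second-order systems: MultKAN-style width specification [1, [0, 6], 1],
# i.e. a hidden layer with 0 addition nodes and 6 multiplication nodes.
kan_second = KAN(width=[1, [0, 6], 1], grid=3, k=3, seed=0)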

2.2.3. Training Procedure

The training parameters we use include the following:
  • Loss function: We use mean squared error (MSE) between the predicted output and the true system response.
  • Optimizer: LBFGS [15] with a learning rate of 0.1.
  • Regularization coefficient: for both first- and second-order systems, we use a regularization coefficient lamb = 0.0001, which corresponds to $\lambda$ in the total training loss
$$\ell_{\mathrm{total}} = \ell_{\mathrm{pred}} + \lambda\left(\mu_1 \sum_{l=0}^{L-1} \left|\Phi_l\right|_1 + \mu_2 \sum_{l=0}^{L-1} S(\Phi_l)\right) \quad (9)$$
    and controls the overall regularization magnitude.
  • Grid extension: We start at grid size 3 and employ grid extension between trainings as suggested in the original KAN paper [6] to deal with loss plateaus and improve accuracy.
  • Batch size: We train on the whole dataset.
Each model is trained separately on the first-order or second-order datasets, with and without noise, yielding three distinct KAN instances. After initializing the model and running a grid-extension cycle with regularization, we prune the insignificant edges and nodes and run one more training cycle with the final model shape before evaluating and fixing the symbolic functions on the edges and running the final trainings. This process proves successful for both first- and second-order systems, with slight parameter adjustments necessitated by the difference in model size.
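Continuing the sketches above, the whole procedure (regularized fitting, grid extension, pruning, and symbolic fixing) can be outlined as follows; the method names mirror the pykan examples at the time of writing and the step counts are our illustrative choices, so this is a sketch of the workflow rather than the authors' exact code.

# Regularized training with LBFGS, grid extension, pruning, and symbolic fixing
# (sketch of the procedure described above, applied to the first-order model).
model = kan_first

# 1) Initial training with sparsity regularization (lamb corresponds to lambda above).
model.fit(dataset, opt="LBFGS", lr=0.1, steps=50, lamb=1e-4)

# 2) Grid extension: refine the spline grid and continue training.
model = model.refine(10)
model.fit(dataset, opt="LBFGS", lr=0.1, steps=50, lamb=1e-4)

# 3) Prune insignificant edges and nodes, then train once more with the final shape.
model = model.prune()
model.fit(dataset, opt="LBFGS", lr=0.1, steps=50)

# 4) Replace the spline edges with the best-matching symbolic functions and retrain.
model.auto_symbolic()
model.fit(dataset, opt="LBFGS", lr=0.1, steps=50)
print(model.symbolic_formula())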

2.2.4. Evaluation Metrics

We evaluate identification performance with the help of the root mean square error (RMSE) on the test set, the parameter estimation error (the absolute difference between true and learned values of the parameters of the first- and second-order systems), and a comparison of the graphs of simulated and real outputs. We also use the library’s capabilities for extracting the symbolic formulas and presenting them in a readable format to compare to the initial equations we selected for the creation of the synthetic datasets. These metrics allow us to assess both the accuracy of the time domain predictions and the fidelity of the recovered system parameters.
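The numerical metrics themselves are straightforward; the short sketch below (our illustration) computes the test RMSE and the absolute parameter errors for a first-order case, reusing the recovered values reported in Table 1 and assuming K and τ have already been read off the symbolic formula.

import numpy as np

def rmse(y_pred, y_true):
    # Root mean square error on the test set.
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Absolute parameter estimation error for K = 2, tau = 50 (recovered values from Table 1).
K_true, tau_true = 2.0, 50.0
K_hat, tau_hat = 1.99999990551827, 50.0000069833
param_err = {'K': abs(K_hat - K_true), 'tau': abs(tau_hat - tau_true)}

t_test = np.linspace(0.0, 100.0, 2000)
y_true = K_true * (1.0 - np.exp(-t_test / tau_true))
y_pred = K_hat * (1.0 - np.exp(-t_test / tau_hat))
print(rmse(y_pred, y_true), param_err)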

3. Results

Our experiments demonstrate that KANs effectively recover both the time-domain behavior and the underlying symbolic models of first- and second-order systems under noise-free and noisy conditions. The original equation and the one recovered by the network match exactly.
Time-domain accuracy: across all trained KAN instances, the average test-set RMSE remained on the order of 10⁻⁴ with unfixed activation functions and around 10⁻⁸ after fixing the appropriate symbolic functions, for both the noise-free and the noisy models (Table 1 and Table 2).
Parameter recovery: learned coefficients were accurate to the seventh decimal place for the noise-free datasets and to the fifth decimal place for the noisy datasets; the recovered equations yield the same time-domain plots, as shown in Figure 2 and Figure 3.
Symbolic function extraction: The extracted symbolic equations matched the original time-domain response equations of each system exactly, confirming correct structural recovery.
We also observed that KANs can be treated as a filter for noisy data, recovering the underlying system dynamics despite the noise.

Comparison with MLPs and ARX

We trained an MLP-based network implemented in PyTorch 2.2.2 on the same datasets. The MLP is a fully connected feedforward neural network composed of configurable hidden layers with ReLU activation functions. The architecture has the shape [1,32,32,1] for the first-order systems and [1,64,64,1] for the second-order systems. We train with the Adam optimizer [16], a learning rate of 1 × 10⁻³, and the MSE loss function. The test error for the noise-free first- and second-order datasets is, respectively, 7.6 × 10⁻⁵ and 9.2 × 10⁻⁵; for the noisy first-order datasets, it is 2.298 × 10⁻³; and for the noisy second-order datasets, it is 2.544 × 10⁻³.
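A sketch of such an MLP baseline is given below; the layer sizes, optimizer, learning rate, and loss follow the description above, while the epoch count and the reuse of the dataset dictionary from the earlier sketch are our assumptions.

import torch
import torch.nn as nn

# Fully connected feedforward baseline, shape [1, 32, 32, 1] with ReLU activations.
mlp = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):                      # epoch count is an assumption
    optimizer.zero_grad()
    loss = loss_fn(mlp(dataset['train_input']), dataset['train_label'])
    loss.backward()
    optimizer.step()

with torch.no_grad():
    test_loss = loss_fn(mlp(dataset['test_input']), dataset['test_label'])
print(test_loss.item())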
Our tests indicate that KANs can reach sufficient accuracy with fewer network parameters than MLPs, and while the numerical results are comparable and the plot comparisons of estimated and actual outputs are satisfactory, as seen in Figure 4, there is no direct possibility for model interpretation in the MLP case. Without employing further identification techniques, MLP models remain a black box.
These results confirm our hypothesis that KANs can both approximate dynamic responses accurately and reveal the underlying model equations.

4. Discussion

We find that KANs can be a competitive alternative to traditional identification techniques by combining high predictive accuracy with symbolic interpretability while using a relatively small number of parameters. The architecture offers both interpretability and transparency by allowing us to extract the closed-form equations directly, thus avoiding the black-box nature of other methods. Their robustness to synthetically added noise also suggests that they could offer a solution for real-world problems.
Our study is mostly exploratory in character, and its scope is limited: we test only single-input single-output systems and use only synthetic data. Expanding to multivariable or higher-order systems will most certainly increase model complexity, training time, and parameter count. We also observe that standard machine learning intuition for hyperparameter tuning often does not transfer directly to KANs.
Future work might investigate the efficiency of KANs for system identification on noisy datasets, higher-order systems, or real-world data, including multi-input multi-output systems and nonlinear dynamics.
Overall, KANs offer a promising framework for system identification, marrying data-driven modeling with interpretable symbolic output.

Author Contributions

Conceptualization, L.C. and V.P.; methodology, L.C. and V.P.; software, L.C.; validation, L.C. and V.P.; formal analysis, L.C. and V.P.; investigation, L.C.; resources, L.C. and V.P.; data curation, L.C.; writing—original draft preparation, L.C.; writing—review and editing, L.C. and V.P.; visualization, L.C.; supervision, V.P.; project administration, V.P.; funding acquisition, V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Regional Development Fund within the OP “Research, Innovation and Digitalization Programme for Intelligent Transformation 2021–2027”, Project No. BG16RFPR002-1.014-0005, Center of Competence “Smart Mechatronics, Eco- and Energy Saving Systems and Technologies”; funder: Ministry of Innovation and Growth of the Republic of Bulgaria.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KAN     Kolmogorov–Arnold neural network
LBFGS   Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm
MSE     mean squared error
MLP     multilayer perceptron
RMSE    root mean square error

References

  1. Zadeh, L.A. On the identification problem. IRE Trans. Circuits Theory 1956, 3, 277–281. [Google Scholar] [CrossRef]
  2. Pillonetto, G.; Chen, T.; Chiuso, A.; De Nicolao, G.; Ljung, L. Regularized System Identification: Learning Dynamic Models from Data; Communications and Control Engineering Series; Springer International Publishing AG: Cham, Switzerland, 2022. [Google Scholar]
  3. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall Information and System Sciences Series; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  4. Pillonetto, G.; Aravkin, A.; Gedon, D.; Ljung, L.; Ribeiro, A.H.; Schön, T.B. Deep Networks for System Identification: A Survey. arXiv 2023, arXiv:2301.12832. [Google Scholar] [CrossRef]
  5. Chen, S.; Billings, S.A.; Grant, P.M. Non-Linear System Identification Using Neural Networks. Int. J. Control. 1990, 51, 1191–1214. [Google Scholar] [CrossRef]
  6. Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljačić, M.; Hou, H.Y.; Tegmark, M. KAN: Kolmogorov-Arnold Networks. arXiv 2025, arXiv:2404.19756. [Google Scholar] [CrossRef]
  7. Ziemann, I.; Tsiamis, A.; Lee, B.; Jedra, Y.; Matni, N.; Pappas, G.J. A Tutorial on the Non-Asymptotic Theory of System Identification. arXiv 2023, arXiv:2309.03873. [Google Scholar] [CrossRef]
  8. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  9. Haywood-Alexander, M.; Arcieri, G.; Kamariotis, A.; Chatzi, E. Response Estimation and System Identification of Dynamical Systems via Physics-Informed Neural Networks. arXiv 2024, arXiv:2410.01340. [Google Scholar] [CrossRef] [PubMed]
  10. Hornik, K.; Stinchcombe, M.; White, H. Multilayer Feedforward Networks Are Universal Approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  11. Kolmogorov, A.N.; Arnol’d, V.; Boltjanskiĭ, V.; Efimov, N.; Èskin, G.; Koteljanskiĭ, D.; Krasovskiĭ, N.; Men’šov, D.; Portnov, I.; Ryškov, S.; et al. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. In Doklady Akademii Nauk; Russian Academy of Sciences: Moscow, Russia, 1957; Volume 114, pp. 953–956. [Google Scholar]
  12. Werbos, P.J. Backpropagation through Time: What It Does and How to Do It. Proc. IEEE 1990, 78, 1550–1560. [Google Scholar] [CrossRef]
  13. Liu, Z.; Ma, P.; Wang, Y.; Matusik, W.; Tegmark, M. KAN 2.0: Kolmogorov-Arnold Networks Meet Science. arXiv 2024, arXiv:2408.10205. [Google Scholar] [CrossRef]
  14. Marmarelis, V.Z. Nonlinear Dynamic, Appendix II: Gaussian White Noise. In Modeling of Physiological Systems, 1st ed.; Wiley: Hoboken, NJ, USA, 2004; pp. 499–501. [Google Scholar] [CrossRef]
  15. Liu, D.C.; Nocedal, J. On the Limited Memory BFGS Method for Large Scale Optimization. Math. Program. 1989, 45, 503–528. [Google Scholar] [CrossRef]
  16. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar] [CrossRef]
Figure 1. (a) Initial model shape after training with regularization and after pruning for the first-order system and (b) initial model shape after training with regularization and after pruning for the second-order system.
Figure 2. Plot comparisons of time-response to a unit step input of original first-order functions and the ones approximated by the KAN models—noise-free and with Gaussian white noise for system parameters: (a) K = 2 ;   τ = 10 ; (b) K = 2 ;   τ = 20 ; and (c) K = 2 ;   τ = 50 .
Figure 3. Plot comparisons of time-response to a unit step input of original second-order functions and approximated by the KAN models for system parameters: (a) ω n = 1 ;   ζ = 0.5 ; (b) ω n = 1 ;   ζ = 0.25 ; and (c) ω n = 1 ;   ζ = 0.1 .
Figure 4. Plot comparisons of time-response to a unit step input of original functions and approximated by the MLP models—noise-free and with Gaussian white noise for: (a) first-order system K = 2 ;   τ = 50 and (b) second-order system ω n = 1 ;   ζ = 0.5 .
Table 1. Test loss results and parameter recovery for first-order system.
System Parameters | Test Loss (Noise-Free) | Test Loss (With Noise) | Parameter Recovery (Noise-Free)
K = 2; τ = 10 | 3.67 × | 5.12 × 10⁻³ | K = 1.99999983330918; τ = 9.99999819211726
K = 2; τ = 20 | 4.52 × 10⁻⁵ | 5.14 × 10⁻³ | K = 1.99978709220886; τ = 20.0180257893
K = 2; τ = 50 | 1.48 × 10⁻⁷ | 5.22 × 10⁻³ | K = 1.99999990551827; τ = 50.0000069833
Table 2. Test loss results and parameter recovery for second-order system.
System Parameters | Test Loss | Parameter Recovery
ωₙ = 1; ζ = 0.5 | 7.94 × 10⁻⁸ | ωₙ = 0.99999998; ζ = 0.5
ωₙ = 1; ζ = 0.25 | 1.81 × 10⁻⁷ | ωₙ = 1; ζ = 0.25
ωₙ = 1; ζ = 0.1 | 3.58 × 10⁻⁶ | ωₙ = 0.994961857795715; ζ = 0.099986051359080
