Abstract
Kolmogorov–Arnold Networks employ learnable univariate activation functions on edges rather than fixed node nonlinearities. Standard B-spline implementations require O(KW) parameters per layer, where K is the number of basis functions and W the number of connections. We introduce shared Gaussian radial basis functions whose learnable centers and widths are maintained globally per layer, reducing parameter complexity across L layers by roughly a factor of three while preserving Sobolev convergence rates. Width clamping and tripartite regularization ensure numerical stability. On MNIST, RBF-KAN achieves test accuracy comparable to B-spline KAN while providing a speedup and a 33% memory reduction, although the generalization gap widens due to the global support of the Gaussian bases. Physics-informed neural networks demonstrate substantial improvements on partial differential equations: elliptic problems show reduced PDE residuals and maximum pointwise errors, parabolic problems gain accuracy, and hyperbolic wave equations improve in both maximum error and error norm. The superior hyperbolic performance derives from the infinite differentiability of the Gaussian bases, which enables accurate high-order derivatives without polynomial dissipation. Ablation studies confirm that coefficient regularization reduces mean error by 40% and that center diversity prevents basis collapse. An intermediate basis count balances expressiveness against overfitting. The architecture establishes Gaussian RBFs as an efficient alternative to B-splines for learnable-activation networks, with particular advantages in scientific computing.
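To make the parameter-sharing idea concrete, the sketch below shows one way such a layer could be written in PyTorch: K Gaussian bases with learnable centers and clamped widths are shared by every edge of the layer, and each edge learns only K mixing coefficients. This is a minimal illustrative sketch, not the authors' implementation; identifiers such as SharedRBFKANLayer, num_bases, and min_width are assumptions, and the tripartite regularization terms are omitted.

```python
# Minimal sketch of a KAN-style layer using K Gaussian RBFs whose centers and
# widths are shared globally across the layer (assumed names, not the paper's code).
import torch
import torch.nn as nn


class SharedRBFKANLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 num_bases: int = 8, min_width: float = 1e-2):
        super().__init__()
        # One set of centers/widths per layer, shared by all edges,
        # instead of a separate spline grid per edge.
        self.centers = nn.Parameter(torch.linspace(-1.0, 1.0, num_bases))
        self.log_widths = nn.Parameter(torch.zeros(num_bases))
        # Per-edge mixing coefficients over the shared bases.
        self.coeffs = nn.Parameter(
            torch.randn(out_features, in_features, num_bases) * 0.1)
        self.min_width = min_width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features)
        # Width clamping keeps the Gaussians from collapsing numerically.
        widths = torch.exp(self.log_widths).clamp_min(self.min_width)
        # Gaussian basis responses phi_k(x_j) for every input feature.
        diff = x.unsqueeze(-1) - self.centers            # (batch, in, K)
        phi = torch.exp(-(diff / widths) ** 2)           # (batch, in, K)
        # Each output sums learnable univariate functions of each input:
        # y_i = sum_j sum_k c_{ijk} * phi_k(x_j)
        return torch.einsum('bik,oik->bo', phi, self.coeffs)


# Usage example: a small two-layer RBF-KAN on flattened 28x28 inputs.
if __name__ == "__main__":
    model = nn.Sequential(SharedRBFKANLayer(784, 64), SharedRBFKANLayer(64, 10))
    logits = model(torch.randn(32, 784))
    print(logits.shape)  # torch.Size([32, 10])
```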