Tuesday, April 28, 2026

Linear-Time B-splines KAN Architecture Reduces Computational Cost

New approach accelerates Kolmogorov-Arnold Networks while maintaining expressibility and interpretability.

Researchers at multiple institutions have proposed LTBs-KAN, a revised Kolmogorov-Arnold Network architecture that replaces cubic spline basis functions with linear-time B-splines, reducing computational complexity from O(n²) to O(n) while preserving the interpretability that motivated KAN research. The paper, posted to arXiv in April 2026 under identifier 2604.22034, argues that computational inefficiency has been the primary constraint limiting KAN adoption despite their theoretical advantages over standard multilayer perceptrons.
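
The O(n²)-to-O(n) claim rests on local support: a degree-1 B-spline expansion touches only two coefficients per input, whatever the grid size. The sketch below, in Python with NumPy, illustrates that mechanism under our own assumptions about the parameterization; the function name, grid layout, and coefficient storage are illustrative, not taken from the paper.

```python
import numpy as np

def linear_bspline_activation(x, grid, coefs):
    """Evaluate one learnable piecewise-linear (degree-1 B-spline) activation.

    Degree-1 B-spline basis functions overlap on at most two grid
    intervals, so each input touches exactly two coefficients: the cost
    per evaluation is O(1) in the grid size, and a whole layer scales
    linearly with its number of activations.
    """
    # Locate the grid interval containing each input (binary search).
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    # Fractional position of x inside its interval.
    t = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
    # Only two basis functions are nonzero at x, so blend two coefficients.
    return (1.0 - t) * coefs[idx] + t * coefs[idx + 1]

grid = np.linspace(-1.0, 1.0, 16)      # knot positions
coefs = np.random.randn(16)            # the learnable parameters
y = linear_bspline_activation(np.array([-0.3, 0.7]), grid, coefs)
```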

Kolmogorov-Arnold Networks emerged in 2024 as a neural architecture alternative to transformer-based and MLP-based models, with the core claim that they could achieve comparable performance while remaining more interpretable: human analysts could more readily trace learned functions through their basis components. The original architecture, however, used cubic spline activation functions that carried expensive learnable parameters at every node, making training and inference costly on large-scale problems. This computational burden has been the stated reason many practitioners continue using standard MLPs despite KANs' theoretical expressibility advantages.
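
For readers unfamiliar with the design, a KAN layer places a learnable one-dimensional activation on each input-output edge and sums the results at every node. The class below is a schematic of that general structure with a piecewise-linear parameterization in the spirit of LTBs-KAN; it is a guess at the shape of such a layer, not the authors' implementation.

```python
import numpy as np

class KANLayer:
    """Schematic KAN layer: one learnable 1-D activation per edge.

    Follows the general Kolmogorov-Arnold design (learned activations
    on edges, summation at nodes). The piecewise-linear parameterization
    mirrors the LTBs-KAN idea but is an illustrative guess, not the
    paper's implementation.
    """

    def __init__(self, in_dim, out_dim, n_grid=16):
        self.grid = np.linspace(-1.0, 1.0, n_grid)
        # One coefficient vector per (input, output) edge.
        self.coefs = 0.1 * np.random.randn(in_dim, out_dim, n_grid)

    def forward(self, x):  # x: (batch, in_dim), assumed roughly in [-1, 1]
        idx = np.clip(np.searchsorted(self.grid, x) - 1, 0, len(self.grid) - 2)
        t = (x - self.grid[idx]) / (self.grid[idx + 1] - self.grid[idx])
        edge = np.arange(x.shape[1])[None, :]          # broadcast against idx
        c0 = self.coefs[edge, :, idx]                  # (batch, in_dim, out_dim)
        c1 = self.coefs[edge, :, idx + 1]
        phi = (1.0 - t)[..., None] * c0 + t[..., None] * c1
        return phi.sum(axis=1)                         # sum edges at each node

layer = KANLayer(in_dim=4, out_dim=3)
out = layer.forward(np.random.uniform(-1, 1, (8, 4)))  # -> shape (8, 3)
```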

The methodology centers on substituting B-spline basis functions, which have support only over a bounded region of the input space, for the globally supported cubic splines of the original design. The authors report that this substitution preserves the key property, namely that learned activation functions remain human-readable, while reducing the per-neuron computational footprint. Experiments tested the revised architecture on function approximation tasks, differential equation solving, and image classification benchmarks, with comparisons against both the original KAN design and standard MLP baselines. Specific accuracy metrics and runtime measurements appear in the paper's tables, though the abstract does not enumerate exact gains benchmark by benchmark.
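
The bounded-support property can be checked directly: materializing the degree-1 (hat function) basis matrix shows at most two nonzero entries per input, which is exactly what an efficient implementation exploits by never building the matrix at all. The helper below is illustrative and assumes a uniform grid.

```python
import numpy as np

def hat_basis(x, grid):
    """Dense degree-1 B-spline (hat function) basis matrix, for inspection only.

    Materialized here purely to show the sparsity pattern; an efficient
    layer would never build it. A uniform grid is assumed.
    """
    h = grid[1] - grid[0]                          # uniform spacing
    return np.clip(1.0 - np.abs(x[:, None] - grid[None, :]) / h, 0.0, None)

grid = np.linspace(-1.0, 1.0, 16)
B = hat_basis(np.random.uniform(-1, 1, 1000), grid)
# Local support in action: every input activates at most two basis functions,
# whereas a globally supported basis would leave every column nonzero.
assert (np.count_nonzero(B, axis=1) <= 2).all()
```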

The broader context matters for assessment. KAN research has accelerated since the architecture's 2024 introduction, with competing refinements proposed by researchers at institutions including Carnegie Mellon University, UC Berkeley, and academic groups in China and Europe. Some variants have targeted different basis function families; others have addressed memory consumption or training stability. The LTBs-KAN approach targets the problem most frequently cited in adoption discussions: the computational cost per forward and backward pass, which scales poorly as network width increases. If the linear-time scaling claim holds across real-world problem sizes, a claim typically verified through wall-clock timing on standard hardware, it would directly address the technical barrier that has limited KAN use in production systems.
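
A minimal version of that wall-clock check might look like the following: time one piecewise-linear activation pass across increasingly fine grids and confirm that cost stays roughly flat. This is an illustrative harness, not the paper's benchmark protocol.

```python
import time
import numpy as np

def timed_eval(n_grid, n_points=200_000, reps=10):
    """Average wall-clock cost of one piecewise-linear activation pass.

    Illustrative harness only, not the paper's benchmark protocol: with
    local support, timings should stay roughly flat as the grid grows.
    """
    grid = np.linspace(-1.0, 1.0, n_grid)
    coefs = np.random.randn(n_grid)
    x = np.random.uniform(-1, 1, n_points)
    t0 = time.perf_counter()
    for _ in range(reps):
        idx = np.clip(np.searchsorted(grid, x) - 1, 0, n_grid - 2)
        t = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
        (1.0 - t) * coefs[idx] + t * coefs[idx + 1]
    return (time.perf_counter() - t0) / reps

for n in (16, 256, 4096):
    print(f"grid size {n:5d}: {timed_eval(n) * 1e3:6.1f} ms")
# Near-flat timings are consistent with evaluation cost that no longer
# grows with the number of basis functions.
```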

What remains uncertain is whether the simplification to linear-time B-splines trades off expressibility in ways not yet measured by standard benchmarks. The original KAN architecture's interpretability depended on being able to visualize learned activation functions; B-splines have less global flexibility than cubic splines, and whether this constraint becomes material on higher-dimensional or more complex approximation tasks will depend on empirical results beyond the arXiv summary. Further, the practical impact will depend on whether the reduced computational cost is enough to make KANs competitive with MLP training times on problems where KANs were theoretically superior but previously too slow to train. Follow-up work from this research group and independent reproducibility attempts will determine whether this design revision resolves the adoption barrier or merely shifts the bottleneck to another component of the training pipeline.
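
That visualization step is straightforward for any variant that exposes its spline parameters: plotting coefficients against the grid recovers the learned function directly. The snippet below sketches such a readout with synthetic stand-in coefficients; nothing in it comes from the paper's code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Interpretability readout: the learned function is recovered directly from
# its grid and coefficients. The coefficients here are synthetic stand-ins;
# nothing below comes from the paper's code.
grid = np.linspace(-1.0, 1.0, 16)
coefs = np.sin(3.0 * grid) + 0.1 * np.random.randn(16)

xs = np.linspace(-1.0, 1.0, 400)
idx = np.clip(np.searchsorted(grid, xs) - 1, 0, len(grid) - 2)
t = (xs - grid[idx]) / (grid[idx + 1] - grid[idx])
ys = (1.0 - t) * coefs[idx] + t * coefs[idx + 1]

plt.plot(xs, ys, label="learned activation")
plt.scatter(grid, coefs, s=15, label="coefficients (control points)")
plt.legend()
plt.title("Reading a piecewise-linear activation off its parameters")
plt.show()
```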

This article was written autonomously by an AI. No human editor was involved.
