
KAPI-ELM: Adaptive Physics-Informed ELM

Updated 20 January 2026
  • The paper demonstrates that adaptive kernel hyperparameter optimization enables highly accurate solutions for stiff and oscillatory PDEs.
  • KAPI-ELM bypasses iterative backpropagation by employing a single-shot least-squares solve enhanced with Bayesian optimization for tuning RBF parameters.
  • Empirical results show that KAPI-ELM outperforms traditional PINNs with orders-of-magnitude lower error and significantly reduced computation time.

The Kernel-Adaptive Physics-Informed Extreme Learning Machine (KAPI-ELM) is a class of solvers for forward and inverse partial differential equation (PDE) problems that leverages adaptive radial basis function (RBF) representations and distributional hyperparameter optimization to resolve sharp gradients, multiscale structure, or oscillatory phenomena. As an extension of Physics-Informed Extreme Learning Machines (PI-ELMs), KAPI-ELM combines the efficiency of closed-form least-squares learning with a hyperparameter-adaptive RBF kernel framework, outperforming classical and neural PDE solvers in regimes involving stiffness, boundary layers, or high-frequency structure (Dwivedi et al., 14 Jul 2025, Dwivedi et al., 13 Jan 2026).

1. Foundations: Physics-Informed Extreme Learning Machines

PI-ELMs are single-layer RBF networks for PDE regression, designed to bypass the iterative backpropagation and architecture tuning required by Physics-Informed Neural Networks (PINNs). The standard PI-ELM architecture expands the solution $u(x)$ as

$$\hat{u}(x; W_{\rm in}, b_{\rm in}, W_{\rm out}) = \sum_{j=1}^{N^*} c_j\,\phi_j(x)$$

where each $\phi_j(x)$ is a fixed RBF, e.g. $\exp(-\|x-\mu_j\|^2/\sigma_j^2)$ in higher dimensions. The input layer (RBF centers $\mu_j$ and widths $\sigma_j$) is randomized and held fixed. Only the output weights $c_j$ are optimized, requiring a single linear solve for PDE-constrained collocation. For $N_c$ collocation and $N_b$ boundary points, the design matrix $H$ and right-hand side $\mathbf{r}$ encode the residuals and constraints:

$$H_{ij} = [\mathcal{L}\phi_j](x_i), \quad \mathbf{c}_{\rm opt} = H^\dagger\,\mathbf{r}$$

This "single-shot" least-squares approach is computationally efficient but exhibits limited adaptivity to localized solution structure (Dwivedi et al., 14 Jul 2025).
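
The single-shot construction above can be sketched in a few lines. The minimal, self-contained example below solves a 1D Poisson problem with Gaussian RBFs and one least-squares solve; the test problem, basis count, and width are illustrative choices, not the paper's setup:

```python
import numpy as np

# PI-ELM sketch for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
# Exact solution u(x) = sin(pi x), so f(x) = -pi^2 sin(pi x).
# RBF centers/widths are random and fixed; only the output weights c
# come from a single least-squares solve (no backpropagation).

rng = np.random.default_rng(0)
N_star, N_c = 60, 120                      # RBFs, interior collocation points
mu = rng.uniform(-0.1, 1.1, N_star)        # random centers (held fixed)
sig = np.full(N_star, 0.15)                # fixed widths

x_c = np.linspace(0, 1, N_c + 2)[1:-1]     # interior collocation points
x_b = np.array([0.0, 1.0])                 # boundary points

def phi(x, mu, sig):
    """Gaussian RBF exp(-(x - mu)^2 / sig^2), as in the PI-ELM ansatz."""
    return np.exp(-(x[:, None] - mu[None, :])**2 / sig[None, :]**2)

def phi_xx(x, mu, sig):
    """Second derivative of phi, i.e. [L phi_j](x) for L = d^2/dx^2."""
    d = x[:, None] - mu[None, :]
    return (4 * d**2 / sig**4 - 2 / sig**2) * phi(x, mu, sig)

# Stack PDE-residual rows [L phi_j](x_i) and boundary rows phi_j(x_b).
H = np.vstack([phi_xx(x_c, mu, sig), phi(x_b, mu, sig)])
r = np.concatenate([-np.pi**2 * np.sin(np.pi * x_c), np.zeros(2)])

c_opt, *_ = np.linalg.lstsq(H, r, rcond=None)   # single-shot solve

x_test = np.linspace(0, 1, 200)
u_hat = phi(x_test, mu, sig) @ c_opt
err = np.max(np.abs(u_hat - np.sin(np.pi * x_test)))
```

With fixed random centers and no adaptation, the achievable error depends on how the centers happen to cover the domain — exactly the limitation KAPI-ELM targets.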

2. Kernel Adaptivity: Distributional Parameter Optimization

KAPI-ELM introduces adaptivity by parameterizing the distribution from which RBF centers and widths are sampled. Instead of tuning all $2N^*$ hidden-node parameters, KAPI-ELM posits that the RBF parameters $\{\mu_j, \sigma_j\}$ are drawn i.i.d. from probability laws $p(\mu; \mathbf{w})$ and $p(\sigma; \mathbf{w})$ with low-dimensional hyperparameters $\mathbf{w}$. These may regulate, for instance, the concentration of RBFs near sharp gradients (e.g., via Gaussian mixtures or location-scale families). Optimizing $\mathbf{w}$, rather than every hidden-unit parameter, compresses the search space and lets the network cluster basis functions where PDE solutions have steep or fine structure:

$$\mu_j \sim \pi_{\rm base}\,\mathcal{U}(\Omega) + \pi_{\rm adap}\,\mathcal{N}(m, S),\qquad \log\sigma_j \sim \mathcal{N}(a, b)$$

The hyperparameters $(m, S, a, b)$, defining the distribution shapes and locations, are tuned to minimize the physics-informed loss. This structure enables KAPI-ELM to adapt hidden-layer expressivity without the cost of full backpropagation or high-dimensional gradient descent (Dwivedi et al., 14 Jul 2025).
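
A minimal sketch of this distributional ansatz, assuming a two-component mixture for the centers and log-normal widths; the hyperparameter packing and the numeric values are illustrative, not taken from the paper:

```python
import numpy as np

# Sample RBF parameters from the low-dimensional distributional ansatz:
# centers from a mixture of a uniform "base" law on the domain and an
# adaptive Gaussian concentrated near a sharp feature; log-widths Gaussian.
# The vector w = (pi_adap, m, S, a, b) is what the outer optimizer tunes.

def sample_rbf_params(N, w, domain=(0.0, 1.0), rng=None):
    rng = rng or np.random.default_rng()
    pi_adap, m, S, a, b = w
    lo, hi = domain
    adaptive = rng.random(N) < pi_adap            # mixture component labels
    mu = np.where(adaptive,
                  rng.normal(m, np.sqrt(S), N),   # clustered near x = m
                  rng.uniform(lo, hi, N))         # uniform base coverage
    sigma = np.exp(rng.normal(a, np.sqrt(b), N))  # log-normal widths
    return mu, sigma

# E.g. concentrate 70% of the RBFs near a boundary layer at x = 0.95,
# with narrow widths (exp(-4) ~ 0.018) to resolve it.
w = (0.7, 0.95, 1e-4, -4.0, 0.1)
mu, sigma = sample_rbf_params(500, w, rng=np.random.default_rng(1))
```

Only the five entries of `w` are exposed to the outer optimizer, regardless of how many RBFs are drawn.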

3. Training Procedure: Bayesian Optimization and Least Squares

KAPI-ELM formulates training as a bi-level learning process:

  • Inner loop: For fixed hyperparameters $\mathbf{w}$, RBF centers and widths are sampled, the corresponding design matrix $H$ is constructed, and the output weights are solved in closed form:

$$\mathbf{c}_{\rm opt} = H^\dagger\,\mathbf{r}$$

The loss is evaluated as the supremum-norm residual, $J(\mathbf{w}) = \|H\,\mathbf{c}_{\rm opt} - \mathbf{r}\|_\infty$.

  • Outer loop: $\mathbf{w}$ is optimized via Bayesian Optimization (BO), using a Gaussian Process (GP) surrogate for $J(\mathbf{w})$ and maximizing the Expected Improvement (EI) acquisition function:

$$\operatorname{EI}(\mathbf{w}) = \mathbb{E}\big[\max(0,\, J_{\min} - J(\mathbf{w}))\big]$$

  • Iteration: For each candidate $\mathbf{w}$, a new configuration is tested, the loss is re-evaluated, the GP surrogate is updated, and optimization continues until convergence in $\mathbf{w}$ or $J$.

The algorithmic structure is summarized in the following pseudocode (Dwivedi et al., 14 Jul 2025):

Initialize w randomly; set tolerances ε and J_tol.
for k = 1 to k_max:
  1. Sample RBF parameters {μ_j, σ_j} ~ p(· | w_k).
  2. Form the design matrix H(μ, σ).
  3. c_k = H† r.
  4. J_k = ‖H(μ, σ) c_k − r‖_∞.
  5. Update the GP posterior with (w_k, J_k).
  6. w_{k+1} = argmax_w EI(w) under the GP.
  stop if ‖w_{k+1} − w_k‖ < ε or J_k < J_tol.
return final w_opt and c_opt.
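
The outer BO loop can be made concrete with a small GP + Expected Improvement implementation. In the sketch below a cheap analytic function stands in for $J(\mathbf{w})$ (a real KAPI-ELM run would evaluate it via the inner sample/assemble/least-squares solve); the kernel length scale, jitter, candidate grid, and iteration count are all illustrative assumptions:

```python
import numpy as np
from math import erf

def J(w):
    # Toy stand-in for the physics-informed loss J(w); in KAPI-ELM this
    # would be the sup-norm residual after the inner least-squares solve.
    return (w - 0.7)**2 + 0.05 * np.sin(8 * w)

def gp_posterior(Xtr, ytr, Xte, ls=0.2, jitter=1e-6):
    """Zero-mean GP with squared-exponential kernel: posterior mean/sd."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * ls**2))
    K = k(Xtr, Xtr) + jitter * np.eye(len(Xtr))
    Ks = k(Xtr, Xte)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, sd, j_min):
    """EI for minimization: E[max(0, j_min - J(w))] under the GP."""
    z = (j_min - mean) / sd
    Phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2)) for v in z]))
    pdf = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    return (j_min - mean) * Phi + sd * pdf

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, 3)                 # initial design points
Jv = J(W)
cand = np.linspace(0.0, 1.0, 201)            # candidate grid for argmax EI
for _ in range(15):
    mean, sd = gp_posterior(W, Jv, cand)
    w_next = cand[np.argmax(expected_improvement(mean, sd, Jv.min()))]
    W, Jv = np.append(W, w_next), np.append(Jv, J(w_next))
w_opt = W[np.argmin(Jv)]
```

Because each outer iteration costs only one linear solve plus a GP update, tens of iterations suffice even though the loss landscape is non-convex.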

4. Physics-Informed Kernel Ansatz and Soft Partition Extensions

The core solution representation in KAPI-ELM is an expansion in Gaussian RBFs:

$$\hat u(x) = \sum_{i=1}^M c_i\, \exp\!\left[-\frac{(x-\alpha_i)^2}{2\sigma_i^2}\right]$$

Recent developments introduce deterministic, low-dimensional parameterizations ("soft partitioning") in which partition lengths $\ell_j$ and a global scale $k_\sigma$ deterministically set the positions $\alpha_i$ and widths $\sigma_i$ of the RBFs. This soft-partition manifold is defined by a small vector of partition parameters in 1D or 2D:

$$\mathcal{M} = \bigl\{(\alpha_i(\boldsymbol{\ell}), \sigma_i(\boldsymbol{\ell}))_{i=1}^M \,\big|\, \boldsymbol{\ell} \in \Delta_{k-1}\bigr\}$$

Centers are distributed with higher density and narrower widths in regions of anticipated solution complexity (e.g., boundary layers, oscillatory subdomains). The soft-partition-based KAPI-ELM admits efficient, continuous coarse-to-fine adaptation, which is particularly beneficial for highly oscillatory, multiscale, or stiff PDEs (Dwivedi et al., 13 Jan 2026).
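
One plausible realization of such a deterministic mapping from partition lengths to centers and widths; the paper's exact formulas are not reproduced here, and equal center counts per partition with spacing-proportional widths are assumptions for illustration:

```python
import numpy as np

def soft_partition_rbfs(lengths, M, k_sigma=1.0, domain=(0.0, 1.0)):
    """Map partition lengths on the simplex to RBF centers and widths.

    Each partition receives an equal share of the M centers (any remainder
    from M // k is dropped), spaced uniformly inside it, so a shorter
    partition yields denser centers with proportionally narrower widths
    sigma_i = k_sigma * (local spacing)."""
    lengths = np.asarray(lengths, float)
    lengths = lengths / lengths.sum()           # project onto the simplex
    a, b = domain
    edges = a + (b - a) * np.concatenate([[0.0], np.cumsum(lengths)])
    k = len(lengths)
    m = M // k                                  # centers per partition
    centers, widths = [], []
    for j in range(k):
        lo, hi = edges[j], edges[j + 1]
        h = (hi - lo) / m                       # local spacing
        centers.append(lo + h * (np.arange(m) + 0.5))
        widths.append(np.full(m, k_sigma * h))
    return np.concatenate(centers), np.concatenate(widths)

# Pack 20 of 30 RBFs into the first fifth of the domain, e.g. to resolve
# a boundary layer near x = 0.
alpha, sigma = soft_partition_rbfs([0.1, 0.1, 0.8], M=30, k_sigma=1.5)
```

Only the partition vector $\boldsymbol{\ell}$ (and $k_\sigma$) is exposed to the optimizer, so refinement is continuous in a handful of parameters.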

For irregular domains, signed-distance-based weighting is employed, improving the stability of least-squares learning and boundary constraint enforcement.
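
A sketch of one such weighting, assuming a disk domain and an exponential decay of row weights with distance from the boundary; both the domain and the weight function are assumptions for illustration, not the paper's scheme:

```python
import numpy as np

# Signed-distance weighting on an irregular domain (here a unit disk):
# least-squares rows are scaled by a function of the signed distance d(x)
# to the boundary, so that constraints near d = 0 dominate the solve.
# w(d) = exp(-|d| / eps) is an assumed, illustrative choice.

def signed_distance_disk(x, y, r=1.0):
    return np.hypot(x, y) - r          # negative inside, zero on boundary

def row_weights(x, y, eps=0.1):
    d = np.abs(signed_distance_disk(x, y))
    return np.exp(-d / eps)            # ~1 near the boundary, decays inward

# Weights grow as collocation points approach the boundary circle.
xs, ys = np.array([0.0, 0.9, 0.99]), np.zeros(3)
w = row_weights(xs, ys)
```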

5. Quantitative Performance and Benchmarks

KAPI-ELM matches or exceeds the expressivity of advanced methods such as XTFC or PINNs with significantly fewer tunable parameters. Empirical results demonstrate:

| Method | # Tunable Params | Typical Runtime | 1D CD Accuracy ($\nu = 10^{-4}$) |
|---|---|---|---|
| PINN | $\sim 1300$ | $10^5$ steps, $>1$ hour | fails ($<10^{-1}$ error) |
| PI-ELM | 0 (hidden fixed) | LS solve, seconds | $10^{-1}$–$10^{-2}$ |
| XTFC | 10,000 (hidden) | LS solve, minutes | $10^{-3}$ |
| KAPI-ELM | 6–50 (hyper) | BO + LS, minutes | $10^{-6}$ |

For a 1D singularly perturbed convection-diffusion equation with $\nu = 10^{-4}$, KAPI-ELM attains $L_\infty \approx 10^{-6}$ using $N^* \approx 1275$ RBFs, outperforming PI-ELM and XTFC, which use considerably more neurons or parameters. In 2D steady-state Poisson problems with sharp sources, KAPI-ELM achieves $L_2$ error $\sim 10^{-5}$ in under 30 seconds, whereas a PINN with $\sim 5300$ parameters requires over an hour and exhibits larger error. In time-dependent advection, adaptive RBFs track Gaussian profiles accurately across time blocks without loss of sharpness (Dwivedi et al., 14 Jul 2025, Dwivedi et al., 13 Jan 2026).

On highly oscillatory or multiscale ODE/PDE test cases, soft-partition KAPI-ELM attains residuals $\|Hc - b\|_\infty \sim 10^{-7}$ on simple ODEs in 0.1 s and solves irregular-domain Poisson or biharmonic problems with MSE $< 10^{-10}$, outperforming contemporary PINN, FBPINN, and Deep-TFC variants by orders of magnitude in error and runtime (Dwivedi et al., 13 Jan 2026).

6. Advantages, Limitations, and Prospective Extensions

Advantages:

  • Single-shot linear (Moore–Penrose) solve replaces gradient-based optimization.
  • Low-parameter, interpretable adaptation of RBFs via distributional or partition hyperparameters.
  • Efficient handling of localized stiffness, sharp internal layers, and oscillatory phenomena.
  • Extension to inverse problems by including governing PDE parameters within the optimized hyperparameters.

Limitations:

  • Currently restricted to linear PDEs; nonlinear extension via successive linearization is an open direction.
  • Scaling is limited by the cost of large-scale pseudoinverse computations.
  • Choice of RBF count and baseline widths relies on heuristics or simple curriculum strategies; full automation is lacking.

Extensions:

Proposed directions include:

  • Addressing nonlinear PDEs through embedded Picard iterations or similar successive linearization.
  • Scaling to very large basis sets with randomized or iterative LS solvers (e.g., conjugate gradients).
  • Automatic model selection, potentially with Bayesian optimization over the entire kernel configuration.
  • Hybrid kernel representations, combining Gaussian and sinusoidal (Fourier) features to optimize for multiscale and mixed-spectral character (Dwivedi et al., 14 Jul 2025, Dwivedi et al., 13 Jan 2026).

7. Theoretical and Practical Significance

KAPI-ELM unites theoretical insight with computational efficiency in PDE-constrained learning. Kernel width adaptation overcomes the spectral bias inherent in gradient-trained neural nets, enabling high-frequency accuracy without recourse to Fourier features or deep architectures. The linear structure confers interpretability and analytical tractability, while the deterministic manifold (soft partition) parameterization facilitates reproducible, architecture-free, and fast solution of multiscale and singularly perturbed problems. This establishes KAPI-ELM as a competitive alternative and a scalable foundation for physics-informed kernel methods in scientific machine learning (Dwivedi et al., 14 Jul 2025, Dwivedi et al., 13 Jan 2026).

