
Explicit Formulas for Continued-Fraction Coefficients

Updated 23 January 2026
  • Explicit formulas for continued-fraction coefficients are closed-form expressions for the terms of series and continued-fraction expansions that are essential in filtering, control, and spectral theory.
  • Methods such as the Galerkin projection and kernel-based techniques yield practical, finite-sum recursions, offering quantifiable bias and variance in numerical approximations.
  • These formulas are pivotal in applications ranging from nonlinear Bayesian filtering to optimal control, ensuring computational tractability and rigorous error analysis.

An explicit formula for continued-fraction coefficients is a closed-form or algorithmically computable expression that yields the terms (coefficients) of a continued-fraction expansion of a given function, sequence, or class of numbers. Continued fractions play a central role in analytic number theory, orthogonal polynomials, nonlinear filtering, and the numerical solution of certain functional equations—most notably, Poisson- or Riccati-type equations arising in Bayesian filtering and optimal control, where the connection to explicit coefficient formulas often emerges through kernel-based or data-driven discretizations.

1. Continued Fractions: Definitions and Structural Context

A general continued fraction is an expression of the form

a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{a_2 + \cfrac{b_3}{a_3 + \ddots}}}

where the sequences \{a_n\} and \{b_n\} are the coefficients. In classical mathematical contexts, explicit formulas refer to closed-form expressions for these coefficients, often given in terms of the parameters of the function being expanded.
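As a concrete illustration, any finite truncation (convergent) of such an expression can be evaluated from the inside out. The sketch below is illustrative (the function name `evaluate_cf` is not from any referenced work) and uses exact rational arithmetic:

```python
from fractions import Fraction

def evaluate_cf(a, b):
    """Evaluate a_0 + b_1/(a_1 + b_2/(a_2 + ... + b_n/a_n)) bottom-up.

    a has length n+1; b has length n (b[k] plays the role of b_{k+1})."""
    value = Fraction(a[-1])
    for a_k, b_k in zip(reversed(a[:-1]), reversed(b)):
        value = a_k + Fraction(b_k) / value
    return value

# All-ones continued fraction: convergents of the golden ratio.
approx = evaluate_cf([1] * 10, [1] * 9)   # ratio of consecutive Fibonacci numbers
```

With all coefficients equal to 1, the convergents are ratios of consecutive Fibonacci numbers, the classical best rational approximations of the golden ratio.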

Continued fractions have historical significance in number theory (best rational approximations, quadratic irrationals), spectral theory, and the theory of orthogonal polynomials (Stieltjes and Jacobi continued fractions).

2. Appearance of Explicit Coefficients in Filtering and Kernel Methods

A contemporary and practically significant domain where explicit continued-fraction-like coefficients arise is the approximation of solutions to Poisson equations with probability-weighted Laplacians—central to the feedback particle filter (FPF) in nonlinear Bayesian filtering. The FPF gain function K(x), which determines the correct feedback for each particle, is the gradient of a potential \phi solving

-\Delta_\rho \phi(x) = h(x) - \hat h, \qquad \Delta_\rho \phi = \frac{1}{\rho} \nabla \cdot (\rho \nabla \phi)

This equation admits spectral and series solutions, and, crucially, various numerical approximations (Galerkin, kernel-based, diffusion map, and decomposition-based) lead to explicit, finite-sum formulas for the "coefficients" of discrete continued fractions or related expansions.

For example, in polynomial (Hermite) bases, the Galerkin projection yields an explicit linear system for the coefficients in terms of moments of the underlying process (Taghvaei et al., 2016). Similarly, the kernel or diffusion map approach produces closed finite-sum expressions for the gain at particle locations, with coefficient matrices arising from data-driven discrete orthogonality relations (Taghvaei et al., 2016, Taghvaei et al., 2019, Pathiraja et al., 2021).

3. Analytical and Data-Driven Derivations of Coefficient Formulas

There are several key algorithmic strategies to obtain explicit coefficient formulas:

a. Galerkin Method for Poisson Equations

  • Choose a basis \{\psi_j(x)\} (e.g., polynomials or orthogonal functions).
  • Expand \phi(x) \approx \sum_j c_j \psi_j(x).
  • The coefficients c_j solve the linear system

\sum_\ell \left[ \int \nabla \psi_j \cdot \nabla \psi_\ell \, \rho \, dx \right] c_\ell = \int (h - \hat h) \, \psi_j \, \rho \, dx

These integrals—computed via particles—yield explicit numerical values for the coefficients, which correspond, in a tridiagonal-banded case, to classical continued-fraction coefficients (Taghvaei et al., 2016).
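The steps above can be sketched numerically for a scalar toy model. All modeling choices below (standard Gaussian particles, monomial basis, linear observation h(x) = x, for which the exact gain is the constant 1) are illustrative assumptions of this sketch, not the setup of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 3                       # particles, basis size (illustrative)
X = rng.normal(size=N)               # particles sampled from rho = N(0, 1)

def h(x):                            # linear observation (illustrative)
    return x

# Monomial basis psi_j(x) = x^j, j = 1..M, and its derivatives.
psi  = np.stack([X ** j for j in range(1, M + 1)], axis=1)           # (N, M)
dpsi = np.stack([j * X ** (j - 1) for j in range(1, M + 1)], axis=1)

h_hat = h(X).mean()
A = dpsi.T @ dpsi / N                # empirical stiffness matrix
b = psi.T @ (h(X) - h_hat) / N       # empirical load vector
c = np.linalg.solve(A, b)            # explicit Galerkin coefficients

K = dpsi @ c                         # approximate gain at each particle
```

For this model the exact solution \phi(x) = x lies in the span of the basis, so the computed gain is close to 1 at every particle, up to sampling error in the empirical integrals.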

b. Kernel and Diffusion Map Methods

  • Construct an empirical kernel matrix T_{ij} using Gaussian weights.
  • Use the fixed-point equation \Phi = T\Phi + \epsilon (H - \hat H) to solve for the discrete potential \Phi.
  • The gain K(x) at a particle is given by an explicit sum:

K(X^i) = \sum_j T_{ij} \Phi_j \left( X^j - \sum_k T_{ik} X^k \right)

or, more generally, as K(X^i) = \sum_j s_{ij} X^j for computed weights s_{ij} (Taghvaei et al., 2019, Taghvaei et al., 2016, Pathiraja et al., 2021).
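Under the same illustrative Gaussian/linear toy assumptions as before, the fixed-point construction and the explicit gain sum can be sketched as follows. The simple row normalization and the 1/(2\epsilon) output scaling are assumptions of this sketch; diffusion-map variants of the cited papers use an additional density correction before normalizing:

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 500, 0.1                            # particles, bandwidth (illustrative)
X = rng.normal(size=N)                       # particles
H = X.copy()                                 # observation h(x) = x (illustrative)

# Row-stochastic Gaussian kernel matrix (simplified normalization).
G = np.exp(-(X[:, None] - X[None, :]) ** 2 / (4 * eps))
T = G / G.sum(axis=1, keepdims=True)

# Fixed-point iteration Phi = T Phi + eps * (H - H_hat).
rhs = eps * (H - H.mean())
Phi = np.zeros(N)
for _ in range(1000):
    Phi = T @ Phi + rhs
    Phi -= Phi.mean()                        # pin down the free additive constant

# Explicit finite-sum gain at each particle; the 1/(2*eps) factor matches
# the kernel bandwidth chosen above (an assumption of this sketch).
m = T @ X                                    # locally weighted particle means
K = (T * Phi[None, :] * (X[None, :] - m[:, None])).sum(axis=1) / (2 * eps)
```

Note that the gain formula is invariant to additive constants in \Phi, since \sum_j T_{ij}(X^j - m_i) = 0 for each row of the row-stochastic matrix T.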

c. Decomposition Approaches for Polynomial Observables

  • For polynomial observation functions h(x), the decomposition method represents the solution as a sum over Hermite (or other orthogonal) polynomial terms, with coefficients constructed via recurrence (e.g., backward recursion) and normalization conditions (Wang et al., 31 Mar 2025).
  • The explicit formula for the kth coefficient—say, \hat K^i_k—is obtained from the three-term recurrence for Hermite polynomials and involves only known (or previously computed) lower-order coefficients.
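The three-term recurrence underlying such coefficient constructions can be sketched directly for the probabilists' Hermite polynomials, He_{n+1}(x) = x\,He_n(x) - n\,He_{n-1}(x). This is a generic illustration of the recurrence structure, not the specific \hat K^i_k recursion of the cited paper:

```python
import numpy as np

def hermite_values(x, n_max):
    """Probabilists' Hermite polynomials He_0..He_{n_max} evaluated at x,
    built with the three-term recurrence He_{n+1} = x*He_n - n*He_{n-1}."""
    x = np.asarray(x, dtype=float)
    He = np.zeros((n_max + 1,) + x.shape)
    He[0] = 1.0
    if n_max >= 1:
        He[1] = x
    for n in range(1, n_max):
        He[n + 1] = x * He[n] - n * He[n - 1]
    return He

# Rows: He_0 = 1, He_1 = x, He_2 = x^2 - 1, He_3 = x^3 - 3x, He_4 = x^4 - 6x^2 + 3
He = hermite_values(np.array([0.0, 1.0, 2.0]), 4)
```

Each row of `He` is built only from the two previous rows, mirroring how a higher-order coefficient depends solely on lower-order ones.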

4. Error Analysis, Theoretical Guarantees, and Practical Considerations

The explicit coefficient formulas in these approaches possess quantifiable error properties:

  • Bias: For kernel/diffusion methods, the bias decays as O(\epsilon) (kernel bandwidth), with explicit constants derived from the spectral gap and regularity of the underlying density (Taghvaei et al., 2016, Taghvaei et al., 2019).
  • Variance: Scales as O(N^{-1/2} \epsilon^{-(1 + d/4)}), where N is the number of particles and d the state dimension; explicit expressions are available for both (Taghvaei et al., 2016).
  • Computational Complexity: Explicit coefficient formulas arising from sparse/decomposition methods scale linearly in the number of particles and the polynomial degree, while kernel-based formulas are O(N^2) but can be accelerated via sparsification (Wang et al., 31 Mar 2025, Taghvaei et al., 2016).

A summary of bias-variance scaling is provided below:

| Method | Bias | Variance | Matrix Structure |
|---|---|---|---|
| Galerkin | O(1/\lambda_M) (projection) | O(1/\sqrt{N}) (sampling) | Dense M \times M linear system |
| Kernel/Diffusion | O(\epsilon) | O(1/(N \epsilon^{d/2})) | Row-stochastic kernel |
| Decomposition | Asymptotically 0 for polynomial h | O(1/\sqrt{N}) (for KDE approx.) | Triangular recurrences |

5. Connections to Classical Continued Fractions and Modern Applications

While classical continued-fraction coefficients are strictly defined in algebraic or orthogonal polynomial settings, the coefficient formulas emerging from particle-based and data-driven kernel methods are structurally analogous; they represent the numerically exact expansion of the solution in a finite orthonormal or kernel-induced function system.

This approach extends beyond filtering: explicit coefficient formulas for continued-fraction-type expansions arise in optimal control (Riccati recursion), inverse problems, and backstepping control for PDEs—in the latter, neural operator or Hermite expansions of gain and kernel functions are computed with explicit coefficient recursions, crucial for stability and expressivity (Vazquez et al., 2024).

6. Summary Table: Paradigmatic Formula Types

| Domain | Coefficient Formula Structure | Reference |
|---|---|---|
| Orthogonal polynomials (Jacobi/Stieltjes) | Three-term or continued-fraction; closed-form for special weights | Classical (not in above) |
| FPF: Galerkin | Linear system from basis inner products | (Taghvaei et al., 2016) |
| FPF: Kernel | Explicit finite sum with data-driven weights | (Taghvaei et al., 2016, Taghvaei et al., 2019) |
| FPF: Decomposition | Recursion with polynomial/Gaussian moments | (Wang et al., 31 Mar 2025) |
| PDE Backstepping | ODE/PDE with explicit boundary coefficients | (Vazquez et al., 2024) |

7. Concluding Remarks

Explicit formulas for continued-fraction coefficients are central wherever functional solutions are projected onto discrete bases or reconstructed via data-driven methods. Modern approaches in filtering, control, and kernel-based numerical algorithms routinely yield such formulas as finite-dimensional, explicit recursions or matrix equations. These representations are critical for achieving both computational tractability and rigorous error control in nonlinear, non-Gaussian, and high-dimensional statistical inference problems (Taghvaei et al., 2016, Taghvaei et al., 2019, Wang et al., 31 Mar 2025).
