Explicit Formulas for Continued-Fraction Coefficients
- Explicit formulas for continued-fraction coefficients provide closed-form expressions that compute key terms in series expansions essential for filtering, control, and spectral theory.
- Methods such as the Galerkin projection and kernel-based techniques yield practical, finite-sum recursions, offering quantifiable bias and variance in numerical approximations.
- These formulas are pivotal in applications ranging from nonlinear Bayesian filtering to optimal control, ensuring computational tractability and rigorous error analysis.
The term "explicit formula for continued-fraction coefficients" refers to closed-form or algorithmically computable expressions that yield the terms (coefficients) of a continued-fraction expansion for a given function, sequence, or class of numbers. Continued fractions play a central role in analytic number theory, orthogonal polynomials, nonlinear filtering, and the numerical solution of certain functional equations—most notably, Poisson- or Riccati-type equations that arise in Bayesian filtering and optimal control, where the connection to explicit coefficient formulas often emerges through kernel-based or data-driven discretizations.
1. Continued Fractions: Definitions and Structural Context
A general continued fraction is an expression of the form
$$ b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cdots}}}, $$
where the sequences $(a_n)$ (partial numerators) and $(b_n)$ (partial denominators) are the coefficients. In classical mathematical contexts, explicit formulas refer to closed-form expressions for these coefficients, often expressed in terms of the parameters of the original function being expanded.
Continued fractions have historical significance in number theory (best rational approximations, quadratic irrationals), spectral theory, and the theory of orthogonal polynomials (Stieltjes and Jacobi continued fractions).
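Given explicit coefficient sequences, the value of a truncated continued fraction (its $n$th convergent) is computed by backward recursion. The sketch below is purely illustrative (the function name and truncation depth are arbitrary choices); it uses the classical expansion of $\sqrt{2}$, whose explicit coefficients are $b_0 = 1$, $b_n = 2$, $a_n = 1$:

```python
import math

def eval_continued_fraction(b, a):
    """Evaluate b[0] + a[0]/(b[1] + a[1]/(b[2] + ...)) by backward recursion.

    b must have exactly one more entry than a; truncating after n partial
    numerators gives the n-th convergent.
    """
    value = b[-1]
    for a_k, b_k in zip(reversed(a), reversed(b[:-1])):
        value = b_k + a_k / value
    return value

# sqrt(2) has explicit coefficients b = [1, 2, 2, 2, ...] and a = [1, 1, ...]
depth = 20
approx = eval_continued_fraction([1] + [2] * depth, [1] * depth)
# approx agrees with sqrt(2) to better than 1e-12 at this depth
```

The backward direction matters: starting from the deepest denominator avoids having to track numerator/denominator pairs of convergents separately.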
2. Appearance of Explicit Coefficients in Filtering and Kernel Methods
A contemporary and practically significant domain where explicit continued-fraction-like coefficients arise is the approximation of solutions to Poisson equations with probability-weighted Laplacians—central to the feedback particle filter (FPF) in nonlinear Bayesian filtering. The FPF gain function $\mathsf{K} = \nabla\phi$, which determines the correct feedback for particles, is the gradient of a potential $\phi$ solving
$$ \nabla \cdot \big( \rho\, \nabla\phi \big) = -\,(h - \hat h)\, \rho, $$
where $\rho$ denotes the density, $h$ the observation function, and $\hat h$ its mean under $\rho$.
This equation admits spectral and series solutions, and, crucially, various numerical approximations (Galerkin, kernel-based, diffusion map, and decomposition-based) lead to explicit, finite-sum formulas for the "coefficients" of discrete continued fractions or related expansions.
For example, in polynomial (Hermite) bases, the Galerkin projection yields an explicit linear system for the coefficients in terms of moments of the underlying process (Taghvaei et al., 2016). Similarly, the kernel or diffusion map approach produces closed finite-sum expressions for the gain at particle locations, with coefficient matrices arising from data-driven discrete orthogonality relations (Taghvaei et al., 2016, Taghvaei et al., 2019, Pathiraja et al., 2021).
3. Analytical and Data-Driven Derivations of Coefficient Formulas
There are several key algorithmic strategies to obtain explicit coefficient formulas:
a. Galerkin Method for Poisson Equations
- Choose a basis (e.g., polynomials or orthogonal functions).
- Expand $\phi = \sum_{k=1}^{M} \kappa_k\, \psi_k$ in the chosen basis.
- The coefficients $\kappa = (\kappa_1, \dots, \kappa_M)$ solve a linear system
$$ A\kappa = b, \qquad A_{lk} = \int \nabla\psi_k \cdot \nabla\psi_l \, \rho\, \mathrm{d}x, \qquad b_l = \int (h - \hat h)\, \psi_l\, \rho\, \mathrm{d}x. $$
These integrals—computed via particles—yield explicit numerical values for the coefficients, which correspond, in a tridiagonal-banded case, to classical continued-fraction coefficients (Taghvaei et al., 2016).
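A minimal one-dimensional sketch of this pipeline follows; the monomial basis, the degree, and the 1-D restriction are illustrative assumptions here, not the cited papers' exact setup:

```python
import numpy as np

def galerkin_gain(x, h, degree=3):
    """Galerkin gain approximation for the FPF Poisson equation (1-D sketch).

    Basis: monomials psi_k(x) = x^k for k = 1..degree (constants lie in the
    null space and are excluded). The coefficients kappa solve A kappa = b,
      A[l, k] = E[psi_k'(X) psi_l'(X)],   b[l] = E[(h(X) - h_hat) psi_l(X)],
    with every expectation replaced by a particle average.
    """
    N = len(x)
    hx = h(x)
    h_tilde = hx - hx.mean()
    ks = np.arange(1, degree + 1)
    psi = x[:, None] ** ks                # psi_k at each particle
    dpsi = ks * x[:, None] ** (ks - 1)    # psi_k' at each particle
    A = dpsi.T @ dpsi / N
    b = psi.T @ h_tilde / N
    kappa = np.linalg.solve(A, b)
    return dpsi @ kappa                   # gain K(x_i) = sum_k kappa_k psi_k'(x_i)
```

For standard Gaussian particles and $h(x) = x$, the exact gain is the constant $1$, and the particle estimate recovers it up to sampling error.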
b. Kernel and Diffusion Map Methods
- Construct an empirical kernel matrix $T$ from Gaussian weights between particle locations.
- Use the fixed-point equation $\Phi = T\Phi + \epsilon\,(h - \hat h)$ to solve for $\Phi$, the discrete potential ($\epsilon$ denotes the kernel bandwidth).
- The gain at a particle $x^i$ is given by an explicit sum:
$$ \mathsf{K}_i = \frac{1}{2\epsilon} \sum_{j=1}^{N} T_{ij}\, \Phi_j \Big( x^j - \sum_{k} T_{ik}\, x^k \Big), $$
or, more generally, as $\mathsf{K}_i = \sum_j s_{ij}\, x^j$ for computed weights $s_{ij}$ (Taghvaei et al., 2019, Taghvaei et al., 2016, Pathiraja et al., 2021).
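A simplified, self-contained 1-D variant of this procedure is sketched below. The exact kernel normalization and the weight formula for $s_{ij}$ differ in detail across the cited papers; here the bandwidth, iteration count, and gain readout (kernel-weighted gradient of the discrete potential) are all illustrative assumptions:

```python
import numpy as np

def kernel_gain(x, h, eps=0.2, iters=300):
    """Diffusion-map-style gain approximation, 1-D sketch.

    Builds a density-corrected, row-stochastic Gaussian kernel matrix T,
    solves the fixed-point equation Phi = T Phi + eps * (h - h_hat) by
    iteration, then reads off the gain at each particle as an explicit sum.
    """
    hx = h(x)
    h_tilde = hx - hx.mean()
    G = np.exp(-(x[:, None] - x[None, :]) ** 2 / (4 * eps))
    d = G.sum(axis=1)
    k = G / np.sqrt(np.outer(d, d))          # density correction
    T = k / k.sum(axis=1, keepdims=True)     # row-stochastic kernel
    Phi = np.zeros_like(x)
    for _ in range(iters):                   # contraction on zero-mean functions
        Phi = T @ Phi + eps * h_tilde
        Phi -= Phi.mean()
    # explicit sum: K_i = (1/(2 eps)) sum_j T_ij Phi_j (x_j - sum_k T_ik x_k)
    x_bar = T @ x
    return (T * Phi[None, :] * (x[None, :] - x_bar[:, None])).sum(axis=1) / (2 * eps)
```

Note that the fixed point is only defined modulo additive constants (the row-stochastic $T$ fixes constants), which is why the iteration re-centers $\Phi$ at each step; the gain readout is itself invariant to such constant shifts.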
c. Decomposition Approaches for Polynomial Observables
- For polynomial observation functions , the decomposition method represents the solution as a sum over Hermite (or other orthogonal) polynomial terms with coefficients constructed via recurrence (e.g., backward recursion) and normalization conditions (Wang et al., 31 Mar 2025).
- The explicit formula for the $m$th coefficient, say $c_m$, is obtained via the three-term recurrence for Hermite polynomials and involves only known (or previously computed) lower-order coefficients.
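For the special case of a standard Gaussian density the recurrence collapses to a closed form, because the probabilists' Hermite polynomials are eigenfunctions of the Gaussian-weighted Laplacian ($\mathcal{L}\,\mathrm{He}_m = -m\,\mathrm{He}_m$). The sketch below shows this simplified instance only, not the cited paper's general recursion:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def poisson_coeffs_gaussian(a):
    """Closed-form Poisson-equation coefficients under a standard Gaussian.

    If h - h_hat = sum_{m>=1} a[m] * He_m(x) (probabilists' Hermite basis),
    then phi = sum_{m>=1} (a[m] / m) * He_m(x) solves L phi = -(h - h_hat),
    since L He_m = -m He_m for the Gaussian-weighted Laplacian L.
    """
    c = np.zeros(len(a))
    c[1:] = np.asarray(a[1:], dtype=float) / np.arange(1, len(a))
    return c  # c[0] is the free additive constant, fixed to 0

# Example: h(x) = x^2 = He_2(x) + 1, so h - h_hat = He_2 and phi = He_2 / 2;
# the gain K = phi' then has Hermite coefficients given by hermeder.
c = poisson_coeffs_gaussian([0.0, 0.0, 1.0])
gain_coeffs = He.hermeder(c)  # derivative in the He basis: He_1, i.e. K(x) = x
```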
4. Error Analysis, Theoretical Guarantees, and Practical Considerations
The explicit coefficient formulas in these approaches possess quantifiable error properties:
- Bias: For kernel/diffusion methods, the bias decays as $O(\epsilon)$, where $\epsilon$ is the kernel bandwidth, with explicit constants derived from the spectral gap and regularity of the underlying density (Taghvaei et al., 2016, Taghvaei et al., 2019).
- Variance: Scales as $O(1/N)$ for fixed bandwidth, where $N$ is the number of particles, with constants that grow as $\epsilon \to 0$ and depend on the state dimension $d$; explicit expressions are available for both (Taghvaei et al., 2016).
- Computational Complexity: Explicit coefficient formulas arising from sparse/decomposition methods scale linearly with the number of particles and the polynomial degree, while kernel-based formulas cost $O(N^2)$ (from the dense $N \times N$ kernel matrix) but can be accelerated via sparsification (Wang et al., 31 Mar 2025, Taghvaei et al., 2016).
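The $O(1/N)$-type sampling variance of particle-based coefficient estimates can be checked empirically even for the simplest explicit coefficient, the constant-gain approximation $\mathsf{K} \approx \frac{1}{N}\sum_i (h(x^i) - \hat h)(x^i - \bar x)$. The Monte Carlo sanity check below is illustrative only (sample sizes and trial counts are arbitrary):

```python
import numpy as np

def const_gain(x, h):
    """Constant-gain approximation: empirical covariance of h(X) and X."""
    hx = h(x)
    return float(np.mean((hx - hx.mean()) * (x - x.mean())))

rng = np.random.default_rng(0)

def estimator_variance(N, trials=300):
    """Variance of the constant-gain estimate over repeated particle draws."""
    draws = [const_gain(rng.standard_normal(N), lambda v: v) for _ in range(trials)]
    return float(np.var(draws))

v_small, v_large = estimator_variance(250), estimator_variance(1000)
# quadrupling N should cut the estimator variance by roughly 4x: O(1/N) scaling
```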
A summary of bias-variance scaling is provided below:
| Method | Bias | Variance | Matrix Structure |
|---|---|---|---|
| Galerkin | Basis-truncation error (projection) | $O(1/N)$ (sampling) | Dense, linear system |
| Kernel/Diffusion | $O(\epsilon)$ (bandwidth) | $O(1/N)$, bandwidth-dependent | Row-stochastic kernel |
| Decomposition | Asympt. 0 for polynomial $h$ | $O(1/N)$ (for KDE approx.) | Triangular recurrences |
5. Connections to Classical Continued Fractions and Modern Applications
While classical continued-fraction coefficients are strictly defined in algebraic or orthogonal polynomial settings, the coefficient formulas emerging from particle-based and data-driven kernel methods are structurally analogous; they represent the numerically exact expansion of the solution in a finite orthonormal or kernel-induced function system.
This approach extends beyond filtering: explicit coefficient formulas for continued-fraction-type expansions arise in optimal control (Riccati recursion), inverse problems, and backstepping control for PDEs—in the latter, neural operator or Hermite expansions of gain and kernel functions are computed with explicit coefficient recursions, crucial for stability and expressivity (Vazquez et al., 2024).
6. Summary Table: Paradigmatic Formula Types
| Domain | Coefficient Formula Structure | Reference |
|---|---|---|
| Orthogonal Poly (Jacobi/Stieltjes) | Three-term or continued-fraction, closed-form for special weights | Classical (not in above) |
| FPF: Galerkin | Linear system from basis inner products | (Taghvaei et al., 2016) |
| FPF: Kernel | Explicit finite sum with data-driven weights | (Taghvaei et al., 2016, Taghvaei et al., 2019) |
| FPF: Decomposition | Recursion with polynomial/Gaussian moments | (Wang et al., 31 Mar 2025) |
| PDE Backstepping | ODE/PDE, explicit boundary coefficient | (Vazquez et al., 2024) |
7. Concluding Remarks
Explicit formulas for continued-fraction coefficients are central wherever functional solutions are projected onto discrete bases or reconstructed via data-driven methods. Modern approaches in filtering, control, and kernel-based numerical algorithms routinely yield such formulas as finite-dimensional, explicit recursions or matrix equations. These representations are critical for achieving both computational tractability and rigorous error control in nonlinear, non-Gaussian, and high-dimensional statistical inference problems (Taghvaei et al., 2016, Taghvaei et al., 2019, Wang et al., 31 Mar 2025).