
Chebyshev Polynomial Approximations

Updated 9 February 2026
  • Chebyshev polynomial approximations are techniques that express functions through sums of Chebyshev polynomials, ensuring near-optimal interpolation and rapid convergence on bounded intervals.
  • They leverage key properties such as orthogonality, a three-term recurrence, and efficient node selection to minimize errors like the Runge phenomenon in numerical integration and spectral methods.
  • Applications include solving differential equations, performing signal processing on graphs, and stabilizing deep network layers, with extensions to multivariate and rational approximations.

Chebyshev polynomial approximations refer to the representation and approximation of functions on bounded intervals, most classically $[-1,1]$, by sums or expansions involving Chebyshev polynomials. These polynomials possess favorable extremal, orthogonality, and computational properties, resulting in efficient schemes for interpolation, numerical integration, spectral methods for differential equations, signal processing, and machine learning.

1. Definition and Fundamental Properties of Chebyshev Polynomials

The Chebyshev polynomials of the first kind, $T_n(x)$, are defined by

$$T_n(x) = \cos(n\,\arccos x), \qquad x \in [-1,1],\quad n = 0,1,2,\dots$$

They satisfy the three-term recurrence
$$T_0(x) = 1, \qquad T_1(x) = x, \qquad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x).$$
These polynomials solve the Sturm–Liouville equation
$$\frac{d}{dx}\left(\sqrt{1-x^2}\,\frac{dy}{dx}\right) + \frac{n^2}{\sqrt{1-x^2}}\,y = 0,$$
exhibiting a deep connection to harmonic analysis. Orthogonality holds under the Chebyshev weight:
$$\int_{-1}^{1} T_m(x)\,T_n(x)\,\frac{dx}{\sqrt{1-x^2}} = \begin{cases} \pi, & m = n = 0, \\ \dfrac{\pi}{2}, & m = n \ge 1, \\ 0, & m \ne n. \end{cases}$$
A related family, the Chebyshev polynomials of the second kind, $U_n(x)$, is given by

$$U_n(x) = \frac{\sin((n+1)\arccos x)}{\sin(\arccos x)},$$

orthogonal with respect to the weight $\sqrt{1-x^2}$.
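The three-term recurrence above gives a stable $O(n)$ evaluation scheme that avoids any trigonometric calls; a minimal sketch in Python (the function name is illustrative):

```python
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) via the three-term recurrence
    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    if n == 0:
        return 1.0
    t_prev, t_curr = 1.0, x  # T_0(x), T_1(x)
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

# The recurrence agrees with the trigonometric definition T_n(x) = cos(n arccos x):
for n in range(8):
    assert abs(chebyshev_T(n, 0.3) - math.cos(n * math.acos(0.3))) < 1e-12
```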

The generating function is
$$G(x,t) = \sum_{n=0}^{\infty} T_n(x)\,t^n = \frac{1 - xt}{1 - 2xt + t^2}.$$

Parseval's identity holds:
$$\int_{-1}^{1} \frac{[f(x)]^2}{\sqrt{1-x^2}}\,dx = \pi a_0^2 + \frac{\pi}{2}\sum_{n=1}^{\infty} a_n^2$$
for the Chebyshev expansion coefficients $a_n$ (Karjanto, 2020).

2. Chebyshev Series and Interpolation

Any $f \in L^2_{1/\sqrt{1-x^2}}[-1,1]$ can be expressed as a (possibly infinite) Chebyshev series
$$f(x) = \sum_{n=0}^{\infty} a_n T_n(x)$$
with coefficients

$$a_0 = \frac{1}{\pi}\int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\,dx, \qquad a_n = \frac{2}{\pi}\int_{-1}^{1} \frac{f(x)\,T_n(x)}{\sqrt{1-x^2}}\,dx \quad (n \ge 1).$$
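Because of the weight $1/\sqrt{1-x^2}$, these integrals reduce to plain cosine sums under Gauss–Chebyshev quadrature (substitute $x = \cos\theta$). A minimal sketch (the helper name and node count are illustrative):

```python
import math

def cheb_coeffs(f, n_max, n_quad=200):
    """Approximate a_0..a_{n_max} with Gauss-Chebyshev quadrature:
    int g(x)/sqrt(1-x^2) dx ~ (pi/N) * sum_k g(cos(theta_k))."""
    thetas = [(2 * k - 1) * math.pi / (2 * n_quad) for k in range(1, n_quad + 1)]
    coeffs = []
    for n in range(n_max + 1):
        s = sum(f(math.cos(t)) * math.cos(n * t) for t in thetas)
        coeffs.append((1.0 if n == 0 else 2.0) * s / n_quad)
    return coeffs

a = cheb_coeffs(math.exp, 8)
# Known values for e^x: a_0 = I_0(1) ~ 1.26607 and a_n = 2*I_n(1) for n >= 1
# (modified Bessel functions), so a_1 ~ 1.13032.
```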

Chebyshev interpolation at the Chebyshev–Gauss nodes (the roots of $T_N$),
$$x_k = \cos\left(\frac{2k-1}{2N}\pi\right), \qquad k = 1,2,\ldots,N,$$
gives near-minimax polynomial interpolation and mitigates the Runge phenomenon exhibited by equispaced interpolation.

The interpolant is
$$P_N(x) = \sum_{n=0}^{N} c_n T_n(x),$$
where the coefficients $c_n$ are computed via a discrete cosine transform:
$$c_0 = \frac{1}{N}\sum_{k=1}^{N} f(x_k), \qquad c_n = \frac{2}{N}\sum_{k=1}^{N} f(x_k)\,T_n(x_k), \quad n \ge 1.$$
For $C^{N+1}$ functions the uniform error satisfies
$$\|f - P_N\|_\infty \le \frac{M}{(N+1)!\,2^{N-1}},$$
where $M = \max_{[-1,1]} |f^{(N+1)}|$ (Karjanto, 2020).
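The contrast with equispaced interpolation shows up directly on Runge's classic example $f(x) = 1/(1+25x^2)$; a short NumPy sketch (the degree and node counts are illustrative):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
N = 20  # polynomial degree
# Chebyshev nodes (roots of T_{N+1}) versus equispaced nodes
xc = np.cos((2 * np.arange(1, N + 2) - 1) * np.pi / (2 * (N + 1)))
xe = np.linspace(-1.0, 1.0, N + 1)
# degree-N interpolants, built in the well-conditioned Chebyshev basis
pc = Chebyshev.fit(xc, runge(xc), N, domain=[-1, 1])
pe = Chebyshev.fit(xe, runge(xe), N, domain=[-1, 1])
xs = np.linspace(-1.0, 1.0, 2001)
err_c = np.max(np.abs(pc(xs) - runge(xs)))  # small, decays like rho^{-N}
err_e = np.max(np.abs(pe(xs) - runge(xs)))  # large near the endpoints (Runge)
```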

3. Convergence and Error Rates

Smooth Functions: For functions analytic in a Bernstein ellipse, Chebyshev coefficients decay exponentially, and the partial-sum error satisfies $\|f - f_N\|_\infty \le C\rho^{-N}$ for some $\rho > 1$ (Tang et al., 2019).

Functions with Bounded Variation: When $f^{(k)}$ is of bounded variation, the Chebyshev coefficients $a_j$ decay as $O(j^{-k-1})$, and the $L^1$ error for degree-$N$ approximation satisfies $O(N^{-k-1})$ with explicit constants (Akansha, 2024).

Endpoint Singularities and Basis Choice: For $u(x) = g(x)\,(1-x^2)^{\varphi}\,[\ln(1-x^2)]^{\theta}$ with $\varphi > 1/2$, Chebyshev, difference, and quadratic-factor basis coefficients decay asymptotically as $O(n^{-\kappa})$, $O(n^{-(\kappa-1)})$, and $O(n^{-(\kappa-2)})$, respectively, with $\kappa = 2\varphi + 1$. Standard Chebyshev truncations incur boundary-layer errors, while bases encoding Dirichlet boundary conditions yield a uniform error distribution (Zhang et al., 2021).

Taylor-like Bounds: For example, for the Chebyshev expansion of $e^x$ on $[-1,1]$, explicit upper and lower polynomial approximants $L_N(x) \le e^x \le U_N(x)$ for $x \le 0$ can be derived via auxiliary inequalities involving Bessel functions and Chebyshev polynomials of both kinds (Wodecki, 2024).

4. Multivariate and Weighted Chebyshev Approximation

Bivariate Approximations: For $f(x,y)$ on $[-1,1]^2$, the expansion

$$f(x,y) \sim \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} a_{ij}\,T_i(x)\,T_j(y)$$

converges uniformly when $f \in C^2([-1,1]^2)$, and the coefficients are given by double orthogonality integrals. Fast algorithms use the 2D FFT on Chebyshev nodes. The uniform remainder decays as $O(1/m + 1/n)$ with $m, n$ the polynomial degrees, with sharper $O((m-1)^{-2})$, $O((n-1)^{-2})$ coefficient decay in pure directions (Scheiber, 2015).
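The double orthogonality integrals likewise reduce to cosine sums on a tensor grid of Chebyshev nodes. A minimal NumPy sketch of the coefficient computation (the function name is illustrative; an FFT-based version would replace the matrix products below):

```python
import numpy as np

def cheb2_coeffs(f, m, n, q=64):
    """Coefficients a_{ij}, 0<=i<=m, 0<=j<=n, of the tensor Chebyshev
    expansion of f on [-1,1]^2, via Gauss-Chebyshev quadrature in x and y."""
    th = (2 * np.arange(1, q + 1) - 1) * np.pi / (2 * q)  # quadrature angles
    x = np.cos(th)
    F = f(x[:, None], x[None, :])                # f on the tensor node grid
    Ci = np.cos(np.outer(np.arange(m + 1), th))  # T_i(x_k) = cos(i*theta_k)
    Cj = np.cos(np.outer(np.arange(n + 1), th))
    A = (Ci @ F @ Cj.T) * (4.0 / q**2)
    A[0, :] *= 0.5   # rows/columns with index 0 carry the smaller normalization
    A[:, 0] *= 0.5
    return A

# Sanity check: f(x,y) = x*y is exactly T_1(x) T_1(y)
A = cheb2_coeffs(lambda x, y: x * y, 2, 2)
```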

Hyperbolic Cross and Numerical Differentiation: High-dimensional differentiation is stabilized by truncating Chebyshev expansions to hyperbolic crosses. For $f$ in bivariate weighted Wiener classes $BW_{s,2}^{(\mu_1,\mu_2)}$, specific choices of the truncation parameter $n$ minimize the total error, resulting in error bounds in weighted $L^q$ norms of explicit algebraic form in terms of the noise level and smoothness parameters (Kyselov et al., 30 Jan 2026).

Adaptive Partitioning: Adaptive partition-of-unity frameworks recursively split domains, fitting low-degree tensor-product Chebyshev expansions locally and combining them via smooth bump functions, yielding a global $C^\infty$ approximation with spectral or near-spectral convergence, automatic anisotropy adaptation, and performance advantages especially in higher dimensions or for functions with localized sharp features (Aiton et al., 2018).

5. Applications and Algorithms

Spectral Methods for PDEs and BVPs: Chebyshev collocation methods solve high-order boundary value problems by reducing to first-order systems, expanding unknowns in Chebyshev series, and collocating at Chebyshev clustered nodes. The solution converges spectrally, with direct imposition of boundary conditions and efficient differentiation via sparse recurrence matrices (Bhowmik, 2014).
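As a concrete illustration of the collocation idea (a minimal dense-matrix sketch, not the sparse first-order reformulation of the cited work), one can solve $u'' = f$ on $[-1,1]$ with homogeneous Dirichlet conditions by expanding $u$ in $T_0,\dots,T_N$ and collocating at interior Chebyshev points:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def solve_poisson_1d(f, N=16):
    """Solve u'' = f on [-1,1] with u(-1) = u(1) = 0 by Chebyshev collocation;
    the unknowns are the coefficients of u in T_0..T_N."""
    xi = np.cos(np.pi * np.arange(1, N) / N)          # N-1 interior nodes
    A = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1); e[j] = 1.0               # basis vector for T_j
        A[:N - 1, j] = C.chebval(xi, C.chebder(e, 2))  # (T_j)'' at the nodes
        A[N - 1, j] = C.chebval(-1.0, e)              # boundary row: u(-1) = 0
        A[N, j] = C.chebval(1.0, e)                   # boundary row: u(+1) = 0
    rhs = np.concatenate([f(xi), [0.0, 0.0]])
    return np.linalg.solve(A, rhs)

c = solve_poisson_1d(np.exp)
# compare with the exact solution u(x) = e^x - x*sinh(1) - cosh(1)
xs = np.linspace(-1.0, 1.0, 201)
u_exact = np.exp(xs) - xs * np.sinh(1.0) - np.cosh(1.0)
err = np.max(np.abs(C.chebval(xs, c) - u_exact))      # spectrally small
```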

Distributed Signal and Graph Processing: Shifted and scaled Chebyshev polynomials approximate graph filters $h(L)$, avoiding spectral decompositions. With $L$ the graph Laplacian, one sets $\tilde{L} = (2/\lambda_{\max})L - I$, so that its spectrum fits in $[-1,1]$. The matrix polynomial

$$h_K(L) = \frac{1}{2}c_0 I + \sum_{k=1}^{K} c_k T_k(\tilde{L})$$

can be evaluated efficiently and in a fully distributed manner via the three-term recurrence. The error decays rapidly for smooth filters; the cost scales as $O(KN)$ for sparse graphs (Shuman et al., 2011).
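A minimal NumPy sketch of this recurrence-based application of $h_K(L)$ to a signal $x$ (dense matrices here for clarity; a distributed version would use only local sparse matrix–vector products):

```python
import numpy as np

def cheb_graph_filter(L, c, x, lam_max):
    """Compute h_K(L) x = (1/2) c[0] x + sum_{k>=1} c[k] T_k(Ltilde) x
    using only matrix-vector products via the three-term recurrence."""
    Lt = (2.0 / lam_max) * L - np.eye(L.shape[0])  # spectrum mapped into [-1,1]
    t_prev = x                                     # T_0(Lt) x
    y = 0.5 * c[0] * t_prev
    if len(c) > 1:
        t_curr = Lt @ x                            # T_1(Lt) x
        y = y + c[1] * t_curr
        for ck in c[2:]:
            t_prev, t_curr = t_curr, 2.0 * (Lt @ t_curr) - t_prev
            y = y + ck * t_curr
    return y

# Example: Laplacian of a 3-node path graph (lambda_max = 3)
L = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
```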

Stable Deep Networks: Chebyshev coefficient truncation yields robust function-approximation layers in deep networks (ChebNets). These constructions achieve spectral accuracy with depth $O(\log N)$, width $O(N)$, and conditioning $O(N)$, outperforming power-series-based RePU architectures for large $N$ in both stability and accuracy (Tang et al., 2019).

Alias-free Differentiation: Least-squares constrained mock-Chebyshev operators use a subset of nodes mimicking Chebyshev–Lobatto points, combining interpolation and regression to control the operator norm and reduce the Runge phenomenon; derivative approximation (even of high order) is accurate up to $O(\sqrt{n})$ derivatives for $n$ data points (Dell'Accio et al., 2022).

Rational and Hermite–Chebyshev Theories: Rational Chebyshev approximants, including (linear/nonlinear) Hermite–Chebyshev and Padé–Chebyshev constructions, extend polynomial approximation to quotient spaces, balancing uniform accuracy with specialized properties (e.g., simultaneous interpolation, endpoint constraints, or best rational approximation under shrinking domains). These approaches admit explicit determinantal formulas and connect closely to classical rational-approximation theory (Jawecki, 2024, Starovoitov et al., 21 Jul 2025).

6. Error Bounds, Filtering, and Computational Aspects

Tail Probability and Monomials: The Chebyshev expansion of $x^n$ has a truncation error expressible exactly as a tail sum of binomial coefficients, with a probabilistic interpretation: the error is twice the probability that a symmetric random walk of $n$ steps deviates by more than $m$. Using Hoeffding bounds, $E(n,m) \le 2\exp(-m^2/(2n))$, so the error decays subexponentially in $m^2/n$ (Saibaba, 2021).
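Under the random-walk interpretation above, the Hoeffding bound is easy to check numerically against the exact binomial tail (the values of $n$ and $m$ below are illustrative):

```python
import math

def walk_tail_prob(n, m):
    """P(|S_n| > m) for S_n the position of a symmetric +/-1 random walk
    after n steps: sum binomial probabilities over outcomes with |2h - n| > m."""
    total = sum(math.comb(n, h) for h in range(n + 1) if abs(2 * h - n) > m)
    return total / 2**n

n, m = 100, 30
E_exact = 2 * walk_tail_prob(n, m)          # exact truncation error E(n, m)
hoeffding = 2 * math.exp(-m**2 / (2 * n))   # the bound quoted above
# E_exact <= hoeffding, and both are small: here m^2/(2n) = 4.5
```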

Filtered Interpolation: Applying de la Vallée Poussin (VP) filters to Chebyshev interpolation controls the Lebesgue constant and attains uniform convergence in weighted Jacobi norms. The filtered interpolants maintain near-best approximation error with explicit necessary and sufficient conditions on Jacobi weights; increasing the filter strength mitigates the Gibbs phenomenon while preserving global convergence rates (Occorsio et al., 2020).

Efficient Polynomial Evaluation and Root-Finding: The Clenshaw algorithm provides $O(n)$ evaluation of Chebyshev expansions. Interval ball-arithmetic variants control error growth (quadratic rather than exponential in $n$) when evaluating on intervals, enabling rigorous root-isolation schemes with complexity $O(n^3)$ in the worst case and $O(n^2)$ practical performance for well-separated roots (Ledoux et al., 2019).
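A minimal scalar Python sketch of the Clenshaw recurrence for $\sum_{n=0}^{N} c_n T_n(x)$ (the ball-arithmetic variants cited above add rigorous error tracking on top of this skeleton):

```python
def clenshaw(c, x):
    """Evaluate sum_n c[n] T_n(x) in O(N) without forming any T_n explicitly:
    backward recurrence b_k = c_k + 2x b_{k+1} - b_{k+2},
    result = c_0 + x b_1 - b_2."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = ck + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

# Check against direct evaluation: 1 + 2*T_1(x) + 3*T_2(x) at x = 0.5,
# where T_1(0.5) = 0.5 and T_2(0.5) = -0.5
val = clenshaw([1.0, 2.0, 3.0], 0.5)
```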

Weighted and Regularized Minimax Approximation: In estimation problems (e.g., for support size), weighted Chebyshev polynomial approximation (with or without regularization) optimally trades bias and variance, yielding efficient convex programs with $O(\log k)$ dimension and matching minimax rates for suitable choices of weight (I et al., 2019).

7. Extensions and Generalizations

Generalized Chebyshev-II and Sobolev Orthogonality: The Chebyshev polynomials of the second kind and their generalizations admit expansions in the Bernstein basis, possess orthogonality under Sobolev-type measures (including point masses at endpoints), and enable interpolation and approximation results that connect to $L^2$ spaces and weighted polynomial inequalities (AlQudah, 2015).

Uniform Approximation for D-finite and Complex-Valued Functions: Rigorous Chebyshev expansion methods for D-finite functions, utilizing block-Clenshaw algorithms and validated functional enclosures, provide uniform (near-minimax) approximations with explicit complexity and error bounds, covering solutions to linear ODEs with polynomial coefficients (Benoit et al., 2014).

Multiseries Hermite–Chebyshev Approximants: The theory of linear and nonlinear Hermite–Chebyshev rational approximations gives determinant-based existence and uniqueness criteria even in the case of multiple (possibly vector-valued) functions, reducing the problem to full-rank conditions on structured Hankel–Toeplitz matrices (Starovoitov et al., 21 Jul 2025).


Chebyshev polynomial approximations, encompassing both theoretical and algorithmic aspects, provide one of the most effective frameworks for the stable, rapidly convergent, and computationally efficient approximation of functions on bounded intervals. Their impact spans classical numerical analysis, numerical PDEs, signal processing on graphs, and modern machine learning architectures, with continuing extensions to multivariate domains, non-classical weights, generalized orthogonalities, and rational function approximations.
