Positive Semi-definite Functions
- Positive semi-definite functions are mappings that, when applied entrywise, preserve the positivity of matrices and kernel structures.
- The article reviews classical results like Schoenberg’s theorem and highlights criteria such as absolute monotonicity and fixed-dimension conditions.
- Applications span statistical covariance estimation, machine learning kernel design, and computational modeling, offering actionable insights.
A positive semi-definite (PSD) function is a mapping that, under defined rules, ensures that certain induced matrices or kernels remain positive semi-definite. The concept pervades matrix analysis, operator theory, probability, functional analysis, and machine learning, unifying a broad taxonomy of entrywise functions, kernel functions, and matrix-valued functionals that preserve positivity. This article surveys structural definitions, classical theorems, forbidden-block phenomena, dimension-dependent classifications, analytic characterizations, extension to multivariate and matrix-valued functions, and applications in statistical inference and computational modeling.
1. Definitions and Fundamental Properties
A function $f : \mathbb{R} \to \mathbb{R}$ (or $f : \mathbb{C} \to \mathbb{C}$), applied entrywise to an $n \times n$ positive semi-definite matrix $A = (a_{ij})$, is called a positivity preserver if $f[A] := (f(a_{ij}))$ is itself positive semi-definite for all $A$ in the cone of PSD matrices with entries from the domain of $f$ (Vishwakarma, 2020). A kernel function $K : X \times X \to \mathbb{R}$ is PSD if for every $n$ and every $x_1, \dots, x_n \in X$, the matrix $\big(K(x_i, x_j)\big)_{i,j=1}^{n}$ is PSD (Zhang, 2016). Analogous notions generalize to block matrices and matrix-valued functions, where positivity demands are formulated over quadratic forms or block-structured kernels (Gesztesy et al., 2016).
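The kernel definition above can be checked numerically. The sketch below uses the Gaussian kernel $K(x, y) = e^{-(x-y)^2}$ as an illustrative example (any PSD kernel would do) and verifies that a finite Gram matrix has no eigenvalue below floating-point round-off:

```python
import numpy as np

# Illustrative check of the kernel PSD definition: for the Gaussian
# kernel K(x, y) = exp(-(x - y)^2), every Gram matrix (K(x_i, x_j))
# built from finitely many points is positive semi-definite.
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=15)

# Pairwise Gram matrix K[i, j] = exp(-(x_i - x_j)^2).
K = np.exp(-np.subtract.outer(x, x) ** 2)

# K is symmetric, so eigvalsh applies; a tiny tolerance absorbs
# floating-point round-off in the eigensolver.
min_eig = np.linalg.eigvalsh(K).min()
print("smallest eigenvalue:", min_eig)
```

Gaussian Gram matrices are often severely ill-conditioned, so the smallest eigenvalue may be tiny, but it should never be meaningfully negative.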
The foundational Schur product theorem asserts that the Hadamard product of two PSD matrices is again PSD. Consequently, monomials and polynomials with non-negative coefficients preserve positivity under entrywise action (Vishwakarma, 2020, Zhang, 2016).
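A quick numerical illustration of the Schur product theorem and its polynomial consequence (the matrices and the polynomial below are arbitrary choices for demonstration):

```python
import numpy as np

# Schur product theorem in action: the Hadamard (entrywise) product of
# two PSD matrices is again PSD, hence so is any entrywise polynomial
# with non-negative coefficients.
rng = np.random.default_rng(1)

def random_psd(n):
    b = rng.standard_normal((n, n))
    return b @ b.T  # Gram matrix, PSD by construction

A, B = random_psd(5), random_psd(5)

hadamard = A * B                    # entrywise product of A and B
# f(x) = 2 + 3x + 0.5x^2 applied entrywise: the scalar 2 broadcasts to
# 2*J (all-ones matrix), which is itself PSD, so the sum stays PSD.
poly = 2.0 + 3.0 * A + 0.5 * A * A

had_min = np.linalg.eigvalsh(hadamard).min()
poly_min = np.linalg.eigvalsh(poly).min()
print(had_min, poly_min)   # both non-negative up to round-off
```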
2. Classical Theorems: Absolute Monotonicity and Entrywise Preservers
Schoenberg's theorem, refined by Rudin, classifies the real or complex functions that preserve PSD under entrywise action across matrices of all sizes, with entries in $(-1, 1)$ (or the complex unit disc). Namely, $f$ must admit a power-series expansion $f(x) = \sum_{k \geq 0} c_k x^k$ with all $c_k \geq 0$; such functions are absolutely monotonic (Vishwakarma, 2020, Belton et al., 2015, Belton et al., 2016). In the multivariate setting, symmetric functions preserving positivity on block-partitioned matrices must have a multivariate power series with non-negative coefficients (Klotz et al., 2016).
These conditions are sharp and exclude any function with negative coefficients, as negative monomials will generate matrices lacking the PSD property for some configurations.
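The sharpness is easy to witness concretely. The following minimal example shows one negative coefficient breaking positivity on a fixed $2 \times 2$ matrix:

```python
import numpy as np

# A single negative coefficient destroys positivity: f(x) = x^2 - x
# applied entrywise to the PSD matrix [[1, 0.5], [0.5, 1]] yields
# [[0, -0.25], [-0.25, 0]], which has eigenvalues +0.25 and -0.25.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
assert np.linalg.eigvalsh(A).min() >= 0   # A is PSD

fA = A**2 - A                             # entrywise f(x) = x^2 - x
min_eig = np.linalg.eigvalsh(fA).min()
print(min_eig)                            # -0.25: f[A] is not PSD
```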
3. Dimension-Dependent and Fixed-Dimension Phenomena
The fixed-dimension problem investigates which functions preserve positivity when applied entrywise to $N \times N$ matrices. This setting relaxes the constraint of absolute monotonicity, permitting certain negative coefficients under tight threshold bounds (Belton et al., 2015, Belton et al., 2016, Ji et al., 2016). Specifically, for polynomials of the form $f(x) = \sum_{j=0}^{N-1} c_j x^j + c_M x^M$ with $M \geq N$, entrywise positivity preservation on $N \times N$ matrices with entries in a bounded interval $(0, \rho)$ is equivalent to:
- $c_0, c_1, \dots, c_{N-1} \geq 0$.
- For the top coefficient, $c_M \geq -\mathcal{C}^{-1}$, where $\mathcal{C}$ is an explicit threshold involving binomial coefficients and Schur polynomial data (Belton et al., 2015).
A summary table for positivity-preserving polynomial coefficients in fixed dimension:
| Degree | Coefficient Condition | Comments |
|---|---|---|
| $j \leq N-1$ | $c_j \geq 0$ | All lower-order terms must be non-negative |
| $j = M \geq N$ | $c_M \geq -\mathcal{C}^{-1}$ (tight lower bound) | Threshold via combinatorial formula |
Cell decompositions of the PSD cone and Rayleigh-quotient formulations further elucidate these phenomena, demonstrating discontinuities and stratifications in critical-value maps and kernel structures (Belton et al., 2016, Belton et al., 2015).
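The fixed-dimension relaxation can be probed numerically. The sketch below is a hedged experiment, not the sharp threshold: for $2 \times 2$ matrices with entries in $(0, 1]$, the polynomial $f(x) = 1 + x + c\,x^2$ with a mildly negative $c = -0.05$ preserves positivity on random samples, while $c = -1$ admits an explicit counterexample (both values are illustrative choices):

```python
import numpy as np

# Hedged numerical sketch of the fixed-dimension phenomenon on 2x2
# matrices: a slightly negative leading coefficient survives, a strongly
# negative one does not. The values c = -0.05 and c = -1 are
# illustrative, not the sharp threshold from the literature.
rng = np.random.default_rng(2)

def f(x, c):
    return 1.0 + x + c * x**2

worst = np.inf
for _ in range(2000):
    v = rng.uniform(0.1, 1.0, size=(2, 3))
    A = v @ v.T                  # PSD with positive entries
    A = A / (A.max() + 1e-9)     # rescale entries into (0, 1]
    worst = min(worst, np.linalg.eigvalsh(f(A, -0.05)).min())

print("worst min eigenvalue, c = -0.05:", worst)

# For c = -1, an explicit PSD matrix with entries < 1 is mapped outside
# the PSD cone: off-diagonal f(0.89) exceeds diagonal f(0.90).
B = np.array([[0.90, 0.89], [0.89, 0.90]])
counter_min = np.linalg.eigvalsh(f(B, -1.0)).min()
print("min eigenvalue, c = -1:", counter_min)   # negative
```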
4. Forbidden Block Structure and Novel PSD-Preservers
When functions are forbidden from acting on specific principal blocks (diagonal or otherwise), entirely new families of PSD-preserving functions arise. For forbidden diagonal blocks of fixed size, the classical absolutely monotonic structure persists, subject to an additional constraint on the power-series coefficients when $f$ is not applied to the diagonal (Vishwakarma, 2020).
The most striking phenomenon emerges when the forbidden diagonal blocks form a partition. In this setting, entrywise linear maps with a possibly negative coefficient preserve PSD; since absolutely monotonic functions admit no negative coefficients, such linear preservers are not absolutely monotonic, establishing the existence of dimension-free non-absolutely-monotonic PSD-preservers (Vishwakarma, 2020). Overlapping forbidden blocks collapse the structure back to the classical theory, restricting to absolutely monotonic maps.
5. PSD-Valued and Matrix-Valued Functions
Matrix-valued and PSD-valued functions generalize the scalar paradigm to maps $F : \mathbb{R}^p \to \mathbb{S}^d_+$ or matrix-valued kernels $K : X \times X \to \mathbb{R}^{d \times d}$ (Muzellec et al., 2021, Gesztesy et al., 2016). Positive semidefiniteness is defined via block matrices, requiring, for all $x_1, \dots, x_n \in X$, that the block matrix $\big(K(x_i, x_j)\big)_{i,j=1}^{n}$ is PSD. Conditional positive semidefiniteness demands nonnegativity of the associated quadratic form on vectors with zero sum (Gesztesy et al., 2016).
A matrix-valued version of Schoenberg's theorem asserts that $K$ is conditionally positive semidefinite if and only if the Hadamard exponential $e^{tK}$, given by entrywise exponentiation, is PSD for all $t > 0$ (Gesztesy et al., 2016). However, in operator contexts, the entrywise exponential fails to be positivity-preserving in general, even when the kernel property holds.
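The scalar case of this correspondence is easy to verify numerically. The kernel $K(x, y) = -|x - y|$ is a classical conditionally PSD kernel, and its entrywise exponential $e^{tK} = e^{-t|x - y|}$ (the Laplace kernel) is genuinely PSD for every $t > 0$:

```python
import numpy as np

# Scalar illustration of the Schoenberg correspondence:
# K(x, y) = -|x - y| is conditionally PSD, and its Hadamard
# (entrywise) exponential exp(-t|x - y|) is PSD for all t > 0.
rng = np.random.default_rng(3)
x = rng.uniform(-5.0, 5.0, size=20)
K = -np.abs(np.subtract.outer(x, x))

# Conditional PSD: the quadratic form is non-negative on zero-sum vectors.
v = rng.standard_normal(20)
v -= v.mean()                       # enforce sum(v) = 0
form_val = v @ K @ v
print("quadratic form on zero-sum vector:", form_val)   # >= 0

# The entrywise exponential is PSD for each tested t > 0.
exp_mins = [np.linalg.eigvalsh(np.exp(t * K)).min() for t in (0.1, 1.0, 10.0)]
print("smallest eigenvalues of exp(tK):", exp_mins)
```

Note that $K$ itself is not PSD (the all-ones direction gives a negative quadratic form), which is exactly why the weaker conditional notion is the right hypothesis here.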
In the context of kernel sum-of-squares models, any continuous PSD-valued function can be globally and uniformly approximated by functions of the form $x \mapsto \Phi(x)^{*} A \, \Phi(x)$ with $A \succeq 0$ and $\Phi$ an RKHS feature map (Muzellec et al., 2021). This class enjoys universal approximation, supports convex function modeling via Hessian sum-of-squares representations, and provides practical and theoretical guarantees for learning under shape constraints.
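A minimal sketch of such a PSD-valued model, assuming a finite feature map: the Gaussian bump features, the block construction $\Phi(x) = \phi(x) \otimes I_d$, and all dimensions below are illustrative stand-ins for an RKHS feature map, not the construction of any specific paper. Positivity of the output is automatic because $A = B^\top B \succeq 0$:

```python
import numpy as np

# Sketch of a PSD-valued model F(x) = Phi(x)^T A Phi(x) with A >= 0,
# where Phi(x) = phi(x) (Kronecker) I_d stacks scalar features blockwise.
# Features, dimensions, and centers are illustrative assumptions.
rng = np.random.default_rng(4)
d, k = 3, 8                          # output matrix size, feature count
centers = rng.uniform(-1, 1, size=k)

def phi(x):
    # Scalar Gaussian bump features (illustrative choice).
    return np.exp(-(x - centers) ** 2)

B = rng.standard_normal((5, k * d))
A = B.T @ B                          # A = B^T B is PSD by construction

def F(x):
    Phi = np.kron(phi(x)[:, None], np.eye(d))   # (k*d, d) block features
    return Phi.T @ A @ Phi                      # d x d, PSD for every x

mins = [np.linalg.eigvalsh(F(x)).min() for x in (-0.7, 0.0, 1.3)]
print("smallest eigenvalues of F(x):", mins)    # all non-negative
```

For any vector $z$, $z^\top F(x) z = (\Phi(x) z)^\top A (\Phi(x) z) \geq 0$, so positivity holds pointwise by construction rather than by constraint enforcement.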
6. Connections with Special Functions and Gram-Kernel Transformations
A unifying scheme identifies many special functions as PSD kernel generators. If $f$ admits an integral or series representation as a transform of a positive measure (Laplace, Fourier, Mellin, etc.), then the induced matrix $\big(f(x_i + x_j)\big)$ or $\big(f(x_i x_j)\big)$ is PSD by Gram's criterion or Bochner–Schoenberg theory (Zhang, 2016).
Classical examples include Gamma, Beta, hypergeometric, theta, elliptic, zeta, and modular functions. For each, the construction proceeds by identifying an orthogonal function family under a positive measure, forming the Gram matrix, and recognizing Schur-product closure. Such constructions not only produce combinatorial and analytic PSD kernels but also connect to the theory of positivity preservers via entrywise application and operator-theoretic perspectives.
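The Gamma function gives a self-contained instance of this construction: since $\Gamma(s) = \int_0^\infty t^{s-1} e^{-t}\, dt$, the matrix $\big(\Gamma(x_i + x_j)\big)$ is the Gram matrix of the functions $t \mapsto t^{x_i - 1/2}$ in $L^2\big((0, \infty), e^{-t}\, dt\big)$, hence PSD:

```python
import numpy as np
from math import gamma

# Gram-kernel construction for the Gamma function: Gamma(x_i + x_j) is
# the inner product of t^{x_i - 1/2} and t^{x_j - 1/2} against the
# measure e^{-t} dt on (0, inf), so the matrix below is PSD.
rng = np.random.default_rng(5)
x = rng.uniform(0.5, 2.0, size=10)

G = np.array([[gamma(xi + xj) for xj in x] for xi in x])
min_eig = np.linalg.eigvalsh(G).min()
print("smallest eigenvalue:", min_eig)   # non-negative up to round-off
```

The same recipe (find the integral representation, read off the orthogonality weight, form the Gram matrix) applies to the other special functions listed above.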
7. Implications for Statistical, Computational, and Analytical Applications
The theory of PSD functions informs diverse disciplines including high-dimensional statistics, where nonlinear shrinkage estimators are required to preserve the covariance matrix's positive semidefiniteness (Belton et al., 2016, Belton et al., 2015). The explicit coefficient bounds and stratification structures enable precise regularization, graphical model estimation, and covariance matrix inference.
In computational mathematics, kernel sum-of-squares models offer algorithmic tractability for PSD- and convex-constrained learning, with efficient primal and dual reformulations and universally approximating families (Muzellec et al., 2021). The Gram-kernel paradigm underpins a broad spectrum of kernel methods and special function theory, spanning orthogonal polynomial ensembles to modular forms.
The analytic classification of entrywise and block-structured PSD-preservers and their fixed-dimension variants supplies rigorous design rules for preserving matrix positivity under transformations, ensuring validity of probabilistic and geometric modeling frameworks in contemporary research.