Weighted Dispersive Inequality
- The weighted dispersive inequality is a fundamental result that gives an explicit upper bound on the variance of a weighted sum in terms of the individual variances and the assigned weights.
- It applies to correlated random variables without requiring independence or identical distribution, using extremal correlation bounds.
- Proofs utilize classical Cauchy–Schwarz arguments and positive semidefinite matrix techniques, with practical implications in risk aggregation and probabilistic limit theorems.
The weighted dispersive inequality is a foundational result in the study of the variability of weighted sums of correlated random variables. It provides an explicit upper bound for the variance of such sums, expressed solely in terms of the individual variances and the weights, regardless of the structure or sign of the covariances. This inequality has significant implications for areas where weighted linear combinations of random variables arise, including survey sampling, risk aggregation, and probabilistic limit theorems. Notably, the result does not require independence, identical distribution, or specific covariance bounds beyond the extremal bounds furnished by the Cauchy–Schwarz inequality. The inequality admits direct proofs via both classical Cauchy–Schwarz arguments and positive semidefinite matrix theory, and it extends to general linear combinations with arbitrary real weights (Liu, 2012).
1. Statement of the Weighted Dispersive Inequality
Let $X_1, \dots, X_n$ be random variables with finite variances, $\operatorname{Var}(X_i) = \sigma_i^2$, and let $w_1, \dots, w_n$ be nonnegative weights such that $\sum_{i=1}^n w_i = 1$. For the weighted sum $S_n = \sum_{i=1}^n w_i X_i$, the main result is

$$\operatorname{Var}(S_n) \le \sum_{i=1}^n w_i \sigma_i^2.$$
This bound holds without assumptions on the independence or identically distributed nature of the $X_i$, and applies even under arbitrary correlation structures. When the weights are arbitrary real numbers $a_1, \dots, a_n$, one defines $A = \sum_{i=1}^n |a_i|$ and $w_i = |a_i|/A$; for $A > 0$,

$$\operatorname{Var}\Bigl(\sum_{i=1}^n a_i X_i\Bigr) \le A \sum_{i=1}^n |a_i|\,\sigma_i^2.$$
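The stated bound can be checked numerically. The following sketch draws a random covariance matrix (so the $X_i$ are correlated, with no independence assumed) and verifies the inequality for a random convex weighting; the construction is illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
A = rng.normal(size=(n, n))
Sigma = A @ A.T              # a random covariance matrix (PSD by construction)
sigma2 = np.diag(Sigma)      # individual variances sigma_i^2

w = rng.random(n)
w /= w.sum()                 # nonnegative weights summing to 1

var_S = w @ Sigma @ w        # Var(sum_i w_i X_i) under covariance Sigma
bound = w @ sigma2           # sum_i w_i sigma_i^2

assert var_S <= bound + 1e-12
```

Any positive semidefinite `Sigma` and any nonnegative `w` summing to 1 will satisfy the assertion, which is exactly the content of the theorem.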
2. Methods of Proof
Two distinct proof strategies establish the upper bound:
(a) Cauchy–Schwarz Argument:
The variance of the weighted sum expands as

$$\operatorname{Var}(S_n) = \sum_{i=1}^n w_i^2 \sigma_i^2 + 2\sum_{i<j} w_i w_j \operatorname{Cov}(X_i, X_j).$$
Applying the Cauchy–Schwarz inequality $\operatorname{Cov}(X_i, X_j) \le \sigma_i \sigma_j$ and the algebraic inequality $\sigma_i \sigma_j \le \tfrac{1}{2}(\sigma_i^2 + \sigma_j^2)$ to each covariance term, the cross-terms are majorized and, upon recombination using $\sum_i w_i = 1$, the total sum is bounded above by the convex combination $\sum_i w_i \sigma_i^2$.
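Spelled out, the chain of estimates runs as follows (the final step uses $\sum_j w_j = 1$):

$$
\begin{aligned}
\operatorname{Var}(S_n)
&= \sum_{i} w_i^2 \sigma_i^2 + 2\sum_{i<j} w_i w_j \operatorname{Cov}(X_i, X_j) \\
&\le \sum_{i} w_i^2 \sigma_i^2 + 2\sum_{i<j} w_i w_j \,\sigma_i \sigma_j \\
&\le \sum_{i} w_i^2 \sigma_i^2 + \sum_{i<j} w_i w_j \,(\sigma_i^2 + \sigma_j^2) \\
&= \sum_{i} w_i \sigma_i^2 \Bigl(w_i + \sum_{j \neq i} w_j\Bigr)
 = \sum_{i} w_i \sigma_i^2 .
\end{aligned}
$$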
(b) Positive Semidefinite Matrix Technique:
Letting $\sigma = (\sigma_1, \dots, \sigma_n)^\top$ and $w = (w_1, \dots, w_n)^\top$, define the covariance matrix $\Sigma$ with entries $\Sigma_{ij} = \operatorname{Cov}(X_i, X_j)$, $\Sigma_{ii} = \sigma_i^2$. The inequality is equivalent to establishing that $w^\top \Sigma w \le \sum_i w_i \sigma_i^2$. The proof constructs the comparison matrix $C = \sigma \sigma^\top$ (with $C_{ij} = \sigma_i \sigma_j$), verifies that $C$ is positive semidefinite and dominates $\Sigma$ entrywise by Cauchy–Schwarz, and uses this entrywise majorization, together with the nonnegativity of $w$, to conclude $w^\top \Sigma w \le w^\top C w = \bigl(\sum_i w_i \sigma_i\bigr)^2 \le \sum_i w_i \sigma_i^2$, the last step by convexity, yielding the desired variance bound.
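The two matrix facts used above, entrywise domination of the covariance matrix by the rank-one comparison matrix and the final convexity step, can be verified numerically; the random covariance construction here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5
B = rng.normal(size=(n, n))
Sigma = B @ B.T                   # covariance matrix of X_1..X_n
sigma = np.sqrt(np.diag(Sigma))   # standard deviations sigma_i
C = np.outer(sigma, sigma)        # comparison matrix C = sigma sigma^T

# Cauchy-Schwarz, entrywise: Cov(X_i, X_j) <= sigma_i sigma_j.
assert np.all(Sigma <= C + 1e-9)

# For nonnegative weights, entrywise domination transfers to quadratic forms.
w = rng.random(n)
w /= w.sum()
assert w @ Sigma @ w <= w @ C @ w + 1e-12        # Var(S_n) <= (sum w_i sigma_i)^2
assert (w @ sigma) ** 2 <= w @ sigma**2 + 1e-12  # convexity (Jensen)
```

Note that entrywise domination alone does not imply $w^\top \Sigma w \le w^\top C w$ for arbitrary $w$; the nonnegativity of the weights is what makes the termwise comparison legitimate.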
3. Generalization to Arbitrary Weights
For general real weights $a_1, \dots, a_n$ with $A = \sum_{i=1}^n |a_i| > 0$, the random sum $T_n = \sum_{i=1}^n a_i X_i$ can be written as $T_n = A \sum_{i=1}^n w_i Y_i$, where $w_i = |a_i|/A$ and $Y_i = \operatorname{sgn}(a_i)\, X_i$. Since $w_i \ge 0$, $\sum_i w_i = 1$, and $\operatorname{Var}(Y_i) = \sigma_i^2$, the original nonnegative-weight result applies to $\sum_i w_i Y_i$. Multiplying through by $A^2$ gives the general upper bound

$$\operatorname{Var}(T_n) \le A \sum_{i=1}^n |a_i|\,\sigma_i^2 = \Bigl(\sum_{j=1}^n |a_j|\Bigr) \sum_{i=1}^n |a_i|\,\sigma_i^2.$$
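A quick numeric check of the general-weight bound, again with a hypothetical random covariance matrix and signed weights:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
B = rng.normal(size=(n, n))
Sigma = B @ B.T                          # covariance matrix (PSD by construction)
sigma2 = np.diag(Sigma)                  # variances sigma_i^2

a = rng.normal(size=n)                   # arbitrary real weights, possibly negative
A = np.abs(a).sum()                      # A = sum_j |a_j|

var_T = a @ Sigma @ a                    # Var(sum_i a_i X_i)
bound = A * (np.abs(a) * sigma2).sum()   # (sum_j |a_j|) * sum_i |a_i| sigma_i^2

assert var_T <= bound + 1e-12
```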
4. Assumptions and Theoretical Context
The only condition required on the random variables is $\operatorname{Var}(X_i) < \infty$ for all $i$ (i.e., $X_i \in L^2$). Neither independence nor identical distribution is assumed, and covariances may be arbitrary, provided only the general Cauchy–Schwarz inequality is respected. The proofs leverage basic properties of correlation matrices and the positive semidefiniteness of the constructed comparison matrix, and do not demand further structural information about the joint distribution of $(X_1, \dots, X_n)$.
5. Applications to Chebyshev's Inequality and the Weak Law of Large Numbers
The weighted dispersive inequality tightens Chebyshev-type probability inequalities for correlated weighted sums. For any $\varepsilon > 0$,

$$\mathbb{P}\bigl(|S_n - \mathbb{E}[S_n]| \ge \varepsilon\bigr) \le \frac{\operatorname{Var}(S_n)}{\varepsilon^2} \le \frac{\sum_{i=1}^n w_i \sigma_i^2}{\varepsilon^2}.$$
In the special case $w_i = 1/n$, the sufficient condition for convergence in probability (the Weak Law of Large Numbers, WLLN) becomes

$$\frac{1}{n} \sum_{i=1}^n \sigma_i^2 \to 0 \quad \text{as } n \to \infty.$$

This suffices for convergence of the empirical mean $\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$ to its expectation in probability, even in the absence of independence among the $X_i$, provided the average variance vanishes.
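The Chebyshev-type bound can be illustrated by simulation. The sketch below builds strongly correlated variables via a shared factor $Z$ (a hypothetical construction, not from the source), so independence clearly fails, and compares the empirical deviation probability of the equal-weight sum against the bound $\sum_i w_i \sigma_i^2 / \varepsilon^2$.

```python
import numpy as np

rng = np.random.default_rng(3)

n, trials, eps = 20, 100_000, 1.0
# Strongly correlated variables: X_i = Z + noise_i shares the common factor Z,
# so no independence assumption holds.
Z = rng.normal(size=trials)
X = Z[:, None] + 0.5 * rng.normal(size=(trials, n))
sigma2 = X.var(axis=0)              # empirical sigma_i^2 (about 1.25 each)

w = np.full(n, 1.0 / n)             # equal weights w_i = 1/n
S = X @ w                           # weighted sum S_n per trial

emp = np.mean(np.abs(S - S.mean()) >= eps)  # empirical deviation probability
bound = (w @ sigma2) / eps**2               # (sum_i w_i sigma_i^2) / eps^2

assert emp <= bound
```

Because the common factor $Z$ keeps the correlations from averaging out, the average variance here does not vanish as $n$ grows, so the WLLN condition above is not met; the bound still holds, but the mean does not concentrate.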
6. Practical Implications and Limitations
The inequality ensures that the variance of a convex or general linear combination of correlated random variables cannot exceed the weighted sum of their individual variances, regardless of the covariances. This is pivotal in scenarios involving composite signals, risk portfolios, or stratified sampling, enabling risk control or error estimation under minimal stochastic assumptions. A principal limitation, however, is that the result leverages only extremal covariance bounds via Cauchy–Schwarz (maximal possible correlation), potentially overlooking tighter bounds obtainable from finer covariance information. If additional constraints on the sign or magnitude of $\operatorname{Cov}(X_i, X_j)$ are known, sharper variance estimates may be feasible. The extension to arbitrary real weights is useful for quickly bounding the variance of unconstrained linear combinations.
7. Summary of Theorems and Corollaries
| Theorem/Corollary | Summary | Conditions |
|---|---|---|
| Theorem 1 (Nonnegative Weights) | $\operatorname{Var}(S_n) \le \sum_i w_i \sigma_i^2$ | $w_i \ge 0$, $\sum_i w_i = 1$ |
| Theorem 2 (Matrix Proof) | Matrix-based positive semidefiniteness proof | Same as above |
| Theorem 5 (General Weights) | $\operatorname{Var}(\sum_i a_i X_i) \le (\sum_j \lvert a_j\rvert)\sum_i \lvert a_i\rvert \sigma_i^2$ via $A = \sum_i \lvert a_i\rvert$ | $a_i \in \mathbb{R}$ |
| Theorem 6–7 (Chebyshev, WLLN) | Application to deviation probability and sufficient WLLN | No independence required |
The weighted dispersive inequality provides a universally applicable variance upper bound for (possibly correlated) weighted sums, with broad utility in probability, statistics, and applied domains (Liu, 2012).