
Weighted Dispersive Inequality

Updated 27 December 2025
  • The weighted dispersive inequality is a fundamental result giving an explicit upper bound on the variance of a weighted sum in terms of the individual variances and the assigned weights.
  • It applies to correlated random variables without requiring independence or identical distribution, using extremal correlation bounds.
  • Proofs utilize classical Cauchy–Schwarz arguments and positive semidefinite matrix techniques, with practical implications in risk aggregation and probabilistic limit theorems.

The weighted dispersive inequality is a foundational result on the variability of weighted sums of correlated random variables. It provides an explicit upper bound for the variance of such sums, expressed solely in terms of the individual variances and the weights, regardless of the structure or sign of the covariances. This has significant implications wherever weighted linear combinations of random variables arise, including survey sampling, risk aggregation, and probabilistic limit theorems. Notably, the result requires neither independence nor identical distribution, nor any covariance information beyond the extremal bounds implied by Cauchy–Schwarz. The inequality admits direct proofs via both a classical Cauchy–Schwarz argument and positive semidefinite matrix theory, and extends to general linear combinations with arbitrary real weights (Liu, 2012).

1. Statement of the Weighted Dispersive Inequality

Let $X_1, \ldots, X_n$ be random variables with finite variances, $\operatorname{Var}(X_i) < \infty$, and let $w_1, \ldots, w_n$ be nonnegative weights such that $\sum_{i=1}^n w_i = 1$. For the weighted sum $Z = \sum_{i=1}^n w_i X_i$, the main result is

$$\operatorname{Var}\left(\sum_{i=1}^n w_i X_i\right) \leq \sum_{i=1}^n w_i \operatorname{Var}(X_i).$$

This bound holds without any independence or identical-distribution assumptions on the $X_i$, and applies under arbitrary correlation structures. When the weights are arbitrary real numbers $\lambda_i \in \mathbb{R}$, one defines $W = \sum_{i=1}^n |\lambda_i|$ and $w_i = |\lambda_i| / W$; for $Z = \sum_{i=1}^n \lambda_i X_i$,

$$\operatorname{Var}\left(\sum_{i=1}^n \lambda_i X_i\right) \leq \left(\sum_{i=1}^n |\lambda_i|\right) \left(\sum_{i=1}^n |\lambda_i| \operatorname{Var}(X_i)\right).$$
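As a quick sanity check, the convex-weight bound can be verified numerically on simulated correlated data (an illustrative sketch; the data, seed, and variable names are arbitrary):

```python
import numpy as np

# Illustrative check of Var(sum_i w_i X_i) <= sum_i w_i Var(X_i)
# for correlated variables and random convex weights.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
X = M @ rng.standard_normal((n, 100_000))  # correlated rows X_1, ..., X_n

w = rng.random(n)
w /= w.sum()                               # nonnegative weights summing to 1

lhs = np.var(X.T @ w)                      # Var(sum_i w_i X_i)
rhs = w @ np.var(X, axis=1)                # sum_i w_i Var(X_i)
print(lhs <= rhs)
```

Because the sample covariance matrix is itself positive semidefinite, the inequality holds exactly for the empirical estimates, not merely up to sampling error.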

2. Methods of Proof

Two distinct proof strategies establish the upper bound:

(a) Cauchy–Schwarz Argument:

The variance of the weighted sum expands as

$$\operatorname{Var}(Z) = \sum_{i=1}^n w_i^2 \operatorname{Var}(X_i) + 2\sum_{1 \leq i < j \leq n} w_i w_j \operatorname{Cov}(X_i, X_j).$$

Applying the Cauchy–Schwarz inequality $|\operatorname{Cov}(X_i, X_j)| \leq \sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}$ together with the AM–GM inequality $\sqrt{ab} \leq \tfrac{1}{2}(a+b)$ majorizes each cross-term; upon recombination, using $\sum_i w_i = 1$, the total sum is bounded above by the convex combination $\sum_i w_i \operatorname{Var}(X_i)$.
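The recombination step can be traced numerically: replacing every cross term by its majorant $w_i w_j (\operatorname{Var}(X_i) + \operatorname{Var}(X_j))$ collapses exactly to $\sum_i w_i \operatorname{Var}(X_i)$ when the weights sum to 1. A sketch on an arbitrary example covariance matrix:

```python
import numpy as np

# Trace the Cauchy-Schwarz / AM-GM majorization step on one example.
rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
Sigma = B @ B.T                 # a valid (PSD) covariance matrix
w = np.array([0.5, 0.3, 0.2])   # convex weights
var = np.diag(Sigma)

# Expansion: Var(Z) = sum_i w_i^2 var_i + 2 sum_{i<j} w_i w_j Cov_ij
var_Z = w @ Sigma @ w

# Majorize each cross term: |Cov_ij| <= sqrt(var_i var_j) <= (var_i + var_j)/2
bound = (w**2) @ var
for i in range(3):
    for j in range(i + 1, 3):
        bound += 2 * w[i] * w[j] * 0.5 * (var[i] + var[j])

# Recombination: the majorized sum equals sum_i w_i var_i exactly.
print(np.isclose(bound, w @ var), var_Z <= bound + 1e-12)
```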

(b) Positive Semidefinite Matrix Technique:

Letting $\sigma_i = \sqrt{\operatorname{Var}(X_i)}$ and $\rho_{ij} = \operatorname{Cov}(X_i, X_j)/(\sigma_i \sigma_j)$, define the symmetric matrix $A$ with entries $A_{ii} = w_i(1 - w_i)$ and $A_{ij} = -w_i w_j \rho_{ij}$ for $i \neq j$, and the vector $\sigma = (\sigma_1, \ldots, \sigma_n)^T$. Then $D = \sigma^T A \sigma$ equals the gap $\sum_i w_i \operatorname{Var}(X_i) - \operatorname{Var}(Z)$, so the inequality is equivalent to $D \geq 0$. The proof compares $A$ with the extremal matrix $B$ obtained by setting every $\rho_{ij} = +1$: one verifies $\sigma^T B \sigma = \sum_i w_i \sigma_i^2 - \left(\sum_i w_i \sigma_i\right)^2 \geq 0$, and since $\rho_{ij} \leq 1$ entrywise implies $\sigma^T A \sigma \geq \sigma^T B \sigma$, the quantity $D$ is nonnegative, yielding the desired variance bound.
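A small numerical sketch of the matrix argument (example data; here we take the diagonal convention $A_{ii} = w_i(1-w_i)$, which makes $\sigma^T A \sigma$ equal the variance gap exactly, and $A$ is then diagonally dominant, hence positive semidefinite):

```python
import numpy as np

# Build A with A_ii = w_i(1 - w_i), A_ij = -w_i w_j rho_ij (i != j),
# and check that D = sigma^T A sigma matches the variance gap and A is PSD.
rng = np.random.default_rng(3)
n = 4
G = rng.standard_normal((n, n))
Sigma = G @ G.T                           # a covariance matrix
sigma = np.sqrt(np.diag(Sigma))
rho = Sigma / np.outer(sigma, sigma)      # correlation matrix, |rho_ij| <= 1

w = rng.random(n)
w /= w.sum()                              # convex weights

A = -np.outer(w, w) * rho
np.fill_diagonal(A, w * (1 - w))

D = sigma @ A @ sigma
gap = w @ np.diag(Sigma) - w @ Sigma @ w  # sum_i w_i Var(X_i) - Var(Z)
eig_min = np.linalg.eigvalsh(A).min()
print(np.isclose(D, gap), eig_min >= -1e-12)
```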

3. Generalization to Arbitrary Weights

For general real weights $\lambda_1, \ldots, \lambda_n$ (with $W = \sum_{i=1}^n |\lambda_i| > 0$), the random sum $Z = \sum_{i=1}^n \lambda_i X_i$ can be written as $Z = W \sum_{i=1}^n w_i \widetilde X_i$, where $w_i = |\lambda_i|/W$ and $\widetilde X_i = \operatorname{sgn}(\lambda_i) X_i$. Since $\operatorname{Var}(\widetilde X_i) = \operatorname{Var}(X_i)$, the nonnegative-weight result applies to $\sum_{i=1}^n w_i \widetilde X_i$. Multiplying through by $W^2$ gives the general upper bound

$$\operatorname{Var}(Z) \leq \left(\sum_{i=1}^n |\lambda_i|\right) \left(\sum_{i=1}^n |\lambda_i| \operatorname{Var}(X_i)\right).$$
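The sign-splitting reduction and the resulting general bound can be checked on made-up data (an illustrative sketch; names and seed are arbitrary):

```python
import numpy as np

# Rewrite sum(lam_i X_i) as W * sum(w_i X~_i) with X~_i = sgn(lam_i) X_i,
# confirm the two representations agree, and check the general bound.
rng = np.random.default_rng(4)
n = 5
G = rng.standard_normal((n, n))
X = G @ rng.standard_normal((n, 50_000))   # correlated rows X_1, ..., X_n
lam = rng.standard_normal(n)               # arbitrary real weights

W = np.abs(lam).sum()
w = np.abs(lam) / W                        # w_i = |lam_i| / W
Xt = np.sign(lam)[:, None] * X             # X~_i = sgn(lam_i) X_i

Z1 = lam @ X
Z2 = W * (w @ Xt)                          # identical representation of Z
var_Z = np.var(Z1)
bound = W * (np.abs(lam) @ np.var(X, axis=1))
print(np.allclose(Z1, Z2), var_Z <= bound)
```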

4. Assumptions and Theoretical Context

The only condition required on the random variables is $\operatorname{Var}(X_i) < \infty$. Neither independence nor identical distribution is assumed, and the covariances $\operatorname{Cov}(X_i,X_j)$ may be arbitrary; they automatically satisfy the Cauchy–Schwarz bound $|\operatorname{Cov}(X_i,X_j)| \leq \sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}$. The proofs leverage basic properties of correlation matrices and the positive semidefiniteness of the constructed comparison matrix, and demand no further structural information about the joint distribution of $(X_1,\ldots,X_n)$.

5. Applications to Chebyshev's Inequality and the Weak Law of Large Numbers

The weighted dispersive inequality combines with Chebyshev's inequality to give computable deviation bounds for correlated weighted sums. For any $\delta > 0$,

$$\operatorname{Pr}\left(\left|Z - \mathbb{E}[Z]\right| \geq \delta\right) \leq \frac{\operatorname{Var}(Z)}{\delta^2} \leq \frac{\sum_{i=1}^n w_i \operatorname{Var}(X_i)}{\delta^2}.$$

In the special case $w_i = 1/n$, a sufficient condition for convergence in probability (the Weak Law of Large Numbers, WLLN) becomes

$$\frac{1}{n}\sum_{i=1}^n \operatorname{Var}(X_i) \to 0 \quad (n \to \infty).$$

This suffices for $\frac{1}{n}\sum_{i=1}^n (X_i - \mathbb{E}[X_i]) \to 0$ in probability, i.e., for the empirical mean to converge to its expectation, even in the absence of independence among the $X_i$, provided the average variance vanishes.
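The Chebyshev-type bound can be illustrated by simulation (a sketch with made-up data: a common factor makes the $X_i$ dependent, yet the bound still applies):

```python
import numpy as np

# Equal-weight average (w_i = 1/n) of correlated variables: compare the
# empirical deviation frequency against the bound (1/n) sum Var(X_i) / delta^2.
rng = np.random.default_rng(5)
n, trials = 200, 20_000
common = rng.standard_normal(trials)               # shared zero-mean component
X = 0.5 * common + rng.standard_normal((n, trials))

Zbar = X.mean(axis=0)                              # Z with w_i = 1/n
delta = 1.5
freq = np.mean(np.abs(Zbar - Zbar.mean()) >= delta)
bound = np.var(X, axis=1).mean() / delta**2        # (1/n) sum_i Var(X_i) / delta^2
print(freq <= bound)
```

Note that the shared factor keeps the average variance from vanishing, so the WLLN condition fails here; the deviation bound itself nonetheless holds.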

6. Practical Implications and Limitations

The inequality ensures that the variance of a convex or general linear combination of correlated random variables cannot exceed the corresponding weighted sum of individual variances, regardless of the covariances. This is pivotal for composite signals, risk portfolios, and stratified sampling, enabling risk control or error estimation under minimal stochastic assumptions. A principal limitation is that the result uses only extremal covariance bounds via Cauchy–Schwarz (maximal possible correlation), potentially overlooking tighter bounds obtainable from finer covariance information; if additional constraints on the sign or magnitude of $\operatorname{Cov}(X_i,X_j)$ are known, sharper variance estimates may be feasible. The extension to arbitrary real weights is useful for quickly bounding the variance of unconstrained linear combinations.

7. Summary of Theorems and Corollaries

| Theorem/Corollary | Summary | Conditions |
| --- | --- | --- |
| Theorem 1 (Nonnegative Weights) | $\operatorname{Var}(\sum w_i X_i) \leq \sum w_i \operatorname{Var}(X_i)$ | $w_i \ge 0$, $\sum w_i = 1$ |
| Theorem 2 (Matrix Proof) | Matrix-based positive semidefiniteness proof | Same as above |
| Theorem 5 (General Weights) | Upper bound on $\operatorname{Var}(\sum \lambda_i X_i)$ via $|\lambda_i|$ | $\lambda_i \in \mathbb{R}$ |
| Theorems 6–7 (Chebyshev, WLLN) | Deviation probability bound and sufficient WLLN condition | No independence required |

The weighted dispersive inequality provides a universally applicable variance upper bound for (possibly correlated) weighted sums, with broad utility in probability, statistics, and applied domains (Liu, 2012).

References (1)
