
Power-Sampling Methods: Theory and Applications

Updated 25 January 2026
  • Power-sampling methods are techniques that select samples based on statistical power or energy efficiency, enhancing estimation and reconstruction even in rare event scenarios.
  • They employ frameworks like mixture-importance sampling, adaptive control algorithms, and D-optimal experimental design to achieve low variance and computational efficiency.
  • Applications include grid reliability, embedded control, compressive spectrum estimation, and ADC energy harvesting, delivering significant improvements in performance and cost reduction.

Power-sampling methods encompass a range of sampling frameworks, estimators, and optimization strategies in which the statistical power, energetic efficiency, or signal power considerations fundamentally guide the sample selection, acquisition rate, or reconstruction algorithm. These techniques have broad application in signal processing, power system reliability, embedded control, compressive sensing, and statistical experimental design. Below, key frameworks and methodological advances from the literature are synthesized, with an emphasis on the probabilistic, algorithmic, and physical principles underpinning power-sampling.

1. Rare Event Importance Sampling and Power System Reliability

A prominent application of power-sampling arises in the estimation of rare event probabilities in high-dimensional systems, such as reliability assessment in electric grids. The ALOE ("At-Least-One-rare-Event") mixture importance sampling algorithm offers an unbiased estimator for the probability $\mu$ of the union of $J$ rare events $H_j$ defined over a random variable $x$ (Owen et al., 2017).

Given event probabilities $P_j=\mathbb{P}(H_j)$ and the union bound $\bar\mu=\sum_{j=1}^{J} P_j$, the mixture-sampling density is

$$q(x) = \sum_{j=1}^J \frac{P_j}{\bar\mu}\,\frac{p(x)\,1_{H_j}(x)}{P_j} = \frac{p(x)\sum_{j=1}^J 1_{H_j}(x)}{\bar\mu}$$

Each sample is weighted by $w(x)=\bar\mu/S(x)$, where $S(x)$ is the number of rare events active at $x$. Key properties include:

  • Unbiasedness: $\mathbb{E}[\hat\mu_n]=\mu$.
  • Sharp variance bound: $\operatorname{Var}(\hat\mu_n)\leq\mu(\bar\mu-\mu)/n$.
  • Relative error bounded by $O(\sqrt{J/n})$. The method achieves an empirical coefficient of variation $\approx 0.0024$ with $n=10^4$ samples on events with probabilities below $10^{-22}$ under thousands of constraints (a minimal sampler sketch follows this list).
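
A minimal Python sketch of the ALOE sampler, for the illustrative case where each rare event is a half-space $H_j=\{x: a_j^\top x \ge b_j\}$ under a standard Gaussian nominal density (an assumption made here for concreteness; the cited work treats general constraint sets):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def aloe_estimate(A, b, n):
    """ALOE estimator of mu = P(union_j {a_j^T x >= b_j}) for x ~ N(0, I_d).

    Assumes the rows of A are unit-norm, so P_j = Phi(-b_j).
    Returns (mu_hat, standard_error)."""
    J, d = A.shape
    P = norm.sf(b)                     # P_j = P(a_j^T x >= b_j), unit-norm a_j
    mu_bar = P.sum()                   # union bound
    # pick which event to condition on, with probability P_j / mu_bar
    idx = rng.choice(J, size=n, p=P / mu_bar)
    w = np.empty(n)
    for i, j in enumerate(idx):
        u = A[j]
        # draw t ~ N(0,1) conditioned on t >= b_j via the inverse survival fn
        t = norm.isf(rng.uniform(0.0, P[j]))
        x = rng.standard_normal(d)
        x += (t - x @ u) * u           # force a_j^T x = t, keep orthogonal part
        S = np.count_nonzero(A @ x >= b)   # number of events x falls in
        w[i] = mu_bar / S              # ALOE weight
    return w.mean(), w.std(ddof=1) / np.sqrt(n)
```

Because every draw lands in at least one rare event, $S(x)\ge 1$ and the weights are bounded by $\bar\mu$, which is the mechanism behind the sharp variance bound above.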

These power-sampling schemes underpin adaptive and scenario-based formulations in chance-constrained DC-OPF for grid security (Lukashevich et al., 2021), where sample-efficient scenario generation is achieved by mixture importance sampling conditional on constraint violations.

2. Power-aware Adaptive Sampling for Embedded Control

In embedded control, sampling rate regulation is critically linked to control performance and power consumption. The adaptive regulation algorithms developed by Naskar et al. pursue online selection of control task sampling periods to minimize quadratic LQG cost under an energy budget (Raha, 2018):

  • Mathematical model: sampling period $h_j\in H$ for disturbance/noise level $l_j\in L$; total energy $E(\alpha)$ and average control cost $\mathcal{J}(\alpha)$.
  • Dominance pruning and greedy-knapsack algorithms select a mapping $\mathcal{M}:L\to H$.
  • Optimality guarantees for pruning; $O(kn\log n)$ complexity for the greedy heuristic.
  • Quantitative results: 10–30% cost improvement over fixed-rate controllers, up to 50% power reduction, and substantial battery-life extension (up to 17 months).

The approach assumes linear time-invariant dynamics and constant per-sample energy; its limitations include scalability to nonlinear plants, the absence of delay modeling, and the lack of formal worst-case guarantees for the greedy heuristic. A sketch of the selection procedure follows.
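
A sketch of the dominance-pruning and greedy-knapsack selection, assuming a precomputed table of (energy, LQG cost, period) triples per noise level; the table format, budget variable, and function names are illustrative, not taken from the paper:

```python
def prune_dominated(options):
    """Dominance pruning: keep only (energy, cost, period) triples on the
    Pareto frontier; any option with higher energy AND higher cost than
    another is discarded."""
    pruned, best_cost = [], float("inf")
    for energy, cost, period in sorted(options):       # ascending energy
        if cost < best_cost:
            pruned.append((energy, cost, period))
            best_cost = cost
    return pruned

def greedy_period_map(table, budget):
    """Greedy knapsack heuristic: start each noise level at its lowest-energy
    option, then repeatedly buy the upgrade with the best cost reduction per
    unit of extra energy until the budget is exhausted.

    table: {level: [(energy, cost, period), ...]}  (illustrative format)
    Returns {level: chosen period}."""
    frontier = {l: prune_dominated(opts) for l, opts in table.items()}
    choice = {l: 0 for l in frontier}                  # index into frontier[l]
    spent = sum(frontier[l][0][0] for l in frontier)
    while True:
        best = None
        for l, i in choice.items():
            if i + 1 < len(frontier[l]):
                d_e = frontier[l][i + 1][0] - frontier[l][i][0]
                d_c = frontier[l][i][1] - frontier[l][i + 1][1]  # cost drop
                if spent + d_e <= budget:
                    ratio = d_c / max(d_e, 1e-12)
                    if best is None or ratio > best[0]:
                        best = (ratio, l, d_e)
        if best is None:
            break
        _, l, d_e = best
        choice[l] += 1
        spent += d_e
    return {l: frontier[l][i][2] for l, i in choice.items()}
```

For example, with `table = {"low": [(1.0, 5.0, 0.10), (3.0, 2.0, 0.02)], "high": [(2.0, 9.0, 0.05), (4.0, 4.0, 0.01)]}` and `budget = 7.0`, the heuristic upgrades the "high" level first because its cost reduction per unit energy is larger.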

3. Compressive and Noncompressive Power Spectrum Sampling

Blind and non-blind sub-Nyquist power-spectrum estimation is a key area for power-sampling, notably in cognitive radio and wideband spectrum sensing (Cohen et al., 2013, Lexa et al., 2011). In compressive spectral estimation, sampling patterns are chosen to optimize recovery guarantees or minimize system complexity:

  • Multi-coset (periodic nonuniform) sampling: a piecewise-constant PSD is estimated from $p$ channel samples at an average rate $f_s = (p/L)\,W$.
  • Under sparsity, compressive estimators (standard $\ell_1$, nonnegative least squares) exploit measurement matrices with subsampled-DFT structure; noncompressive (least-squares) estimators rely on full-rank system matrices.
  • The minimal sampling rate for exact reconstruction varies:
    • Non-sparse: $f_\mathrm{tot}>f_\mathrm{Nyq}/2$.
    • Sparse, support known: $f_\mathrm{tot}>N_\mathrm{sig} B$ (half the Landau rate).
    • Sparse, support unknown (blind): $f_\mathrm{tot}>2N_\mathrm{sig} B$.
  • NNLS is particularly efficient when the power spectrum is nonnegative and sparse (see the sketch below). Algorithmic complexity, recovery error bounds, and trade-offs in resolution versus sampling rate are extensively characterized.
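
A toy NNLS recovery of a sparse, nonnegative piecewise-constant PSD from subsampled-DFT-style measurements; the measurement operator here is a generic subsampled DFT, a deliberate simplification of the multi-coset correlation matrices in the cited papers:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Toy setup: PSD is piecewise constant over L = 32 bins, 3 occupied bands
L, p = 32, 12                        # L bins, p "channel" measurements
s_true = np.zeros(L)
s_true[[4, 13, 27]] = [1.0, 0.6, 0.3]

# Subsampled-DFT-like operator: pick p rows of the L x L DFT; stack the
# real and imaginary parts so the solver works over the reals.
rows = rng.choice(L, size=p, replace=False)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(L)) / L)
A = np.vstack([F.real, F.imag])

y = A @ s_true + 0.01 * rng.standard_normal(2 * p)   # noisy measurements

# Nonnegative least squares exploits s >= 0; no sparsity penalty needed
s_hat, _ = nnls(A, y)
print("estimated support:", np.nonzero(s_hat > 0.05)[0])
```

The nonnegativity constraint alone acts as an implicit regularizer here, which is why NNLS competes with $\ell_1$ methods when the spectrum is sparse and nonnegative.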

Recent advances in generalized coprime sampling reconstruct autocorrelation and power spectrum from sub-Nyquist samples via FFT-based algorithms. These methods maintain performance and speed for high-dimensional, distributed, or real-time applications (Jiang et al., 2023).

4. Statistical Experimental Design: Power in Sample Selection

Optimal sampling design for regression and time series under computational or energy constraints leverages a notion of statistical power, in this context the determinant of the information matrix ("D-optimality"). Streaming leverage-score sampling optimizes information acquisition under a sampling rate constraint (Xie et al., 2023):

  • The sampling rule $s(x)=1_{x^\prime \Sigma_x^{-1}x > r}$ enforces D-optimality for i.i.d. elliptical covariates.
  • Mixture rules combine hard thresholding with a Bernoulli baseline for robustness, as in the sketch below.
  • Empirically, leverage-score and relaxed-leverage sampling halve estimation error and prediction RMSE relative to baseline random sampling, with a $10$–$20\times$ reduction in floating-point cost.
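
A streaming sketch of the thresholded-leverage mixture rule, with the covariate covariance tracked online (the online estimation, threshold `r`, and baseline rate `eps` are assumptions for the sketch; the paper's rule presumes a known or pre-estimated $\Sigma_x$):

```python
import numpy as np

rng = np.random.default_rng(2)

def leverage_stream_sample(stream, r, eps=0.05):
    """Keep covariate x when its score x^T Sigma^{-1} x exceeds r, or with
    baseline probability eps (the Bernoulli mixture component). Sigma is
    tracked online with a Welford-style covariance update."""
    kept, n = [], 0
    mean = m2 = None
    for x in stream:
        x = np.asarray(x, dtype=float)
        if mean is None:
            mean, m2 = np.zeros(x.size), np.zeros((x.size, x.size))
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += np.outer(delta, x - mean)
        sigma = m2 / max(n - 1, 1) + 1e-8 * np.eye(x.size)  # jittered estimate
        score = x @ np.linalg.solve(sigma, x)               # leverage-type score
        if score > r or rng.uniform() < eps:
            kept.append(x)
    return np.asarray(kept)

# Example: high-leverage points are retained preferentially
data = rng.standard_normal((5000, 3))
subset = leverage_stream_sample(data, r=9.0)
print(len(subset), "of", len(data), "points kept")
```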

This methodology is shown to outperform uniform random sampling in online grid load estimation/prediction, and extensions to heavy-tailed or heteroscedastic settings are empirically validated.

5. Physical Power and Sampling: ADC Energy Harvesting

The "eSampling" framework establishes a direct link between signal power, sampling rate, and energy harvest/consumption in ADC architectures (Jain et al., 2020). Key results include:

  • For bandlimited WSS inputs, sampling at the Nyquist rate with SAR ADCs of up to 12 bits allows operation in net-zero or energy-positive regimes.
  • Analytic fidelity–harvest tradeoffs: sampled-signal distortion (NMSE) versus harvested energy $E_h$ and conversion cost $E_{\rm hold}$, given by

$$E_{\rm ratio} = \frac{E_h}{E_{\rm hold}} = \frac{\frac{\eta}{R_h} T_h \sigma_x^2}{a_2(n)K^2\sigma_x^2 + a_1(n)K\sigma_x}$$

  • A circuit implementation in 65 nm CMOS corroborates the theory: measured energy ratio $>12$ dB at 8 bits and a 40 MHz rate. Guidelines are provided for selecting sampling rate and quantization settings under joint fidelity and energy-harvest constraints (a numeric evaluation of the ratio is sketched below).
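
A direct numeric evaluation of the energy-ratio formula; all parameter values here are illustrative placeholders, not the measured 65 nm circuit constants:

```python
import numpy as np

def energy_ratio_db(eta, R_h, T_h, sigma_x, K, a1, a2):
    """E_ratio = E_h / E_hold from the tradeoff above, reported in dB."""
    e_h = (eta / R_h) * T_h * sigma_x**2                  # harvested energy
    e_hold = a2 * K**2 * sigma_x**2 + a1 * K * sigma_x    # conversion cost
    return 10.0 * np.log10(e_h / e_hold)

# Placeholder parameters chosen only to exercise the formula
ratio_db = energy_ratio_db(eta=0.4, R_h=1e3, T_h=1e-6, sigma_x=0.5,
                           K=0.1, a1=1e-13, a2=1e-12)
print(f"{ratio_db:.1f} dB")
```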

6. Sampling Strategies, Correlation Structures, and Coverage

Power-sampling is applied in the context of training data generation for power grid surrogates (Balduin et al., 2022):

  • The Correlation Sampling algorithm enforces strong dependencies in the sample generation space by adjusting samples according to partial-correlation matrices computed from historical data (a simplified recoloring sketch appears at the end of this section).
  • This approach achieves superior coverage of feasible operational states, covering 95% of the convex hull of real data (vs. 70% for copula methods).
  • It differs from SRS and LHS by retaining space-filling properties and realistic dependencies, while being less restrictive than full joint-copula sampling.

Empirical evaluation highlights the benefits in ML surrogate training, with improved generalization and reduction in coverage bias.
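
The published algorithm adjusts samples using partial correlations; the sketch below is a simplified stand-in conveying the underlying recoloring idea (imposing the empirical correlation of historical data on independent draws via a Cholesky factor). The function name and the marginal-rescaling step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def correlation_sampling(historical, n):
    """Recolor independent draws with the empirical correlation structure of
    historical data (simplified stand-in for the Correlation Sampling
    algorithm, which works with partial-correlation matrices)."""
    corr = np.corrcoef(historical, rowvar=False)
    d = corr.shape[0]
    chol = np.linalg.cholesky(corr + 1e-9 * np.eye(d))  # jitter for stability
    z = rng.standard_normal((n, d))          # independent draws
    x = z @ chol.T                           # Corr(x) now approximates corr
    # rescale to the historical marginals' location and spread
    return historical.mean(axis=0) + x * historical.std(axis=0)

# Example: 2-D historical data with strong dependence between the variables
hist = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=1000)
samples = correlation_sampling(hist, n=500)
print(np.corrcoef(samples, rowvar=False)[0, 1])   # close to 0.9
```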

7. Advanced Statistical Power Estimation: Bioequivalence Simulation

Segment-based simulation for bioequivalence test power curves exploits inverse-CDF mapping to focus simulation effort efficiently (Hagar et al., 2023):

  • Power, viewed as a tail probability of the sampling distribution, is approximated by mapping simulated test statistics onto rejection regions and then root-finding the sample-size boundary for each simulation draw (see the sketch at the end of this section).
  • The approach achieves unbiased sample-size recommendations and up to a $10\times$ speedup over brute-force simulation, and applies across a variety of clinical and nonparametric designs.

The combination of quasi-Monte Carlo sampling and efficient root-finding ensures both statistical validity and computational tractability.
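
A minimal illustration of the per-draw boundary idea, simplified to a one-sided z-test rather than the full two-one-sided-test bioequivalence setting; here the sample-size boundary has a closed form, whereas in general it is found by root-finding. The effect size, noise level, and draw count are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, qmc

alpha, delta, sigma = 0.05, 0.3, 1.0      # illustrative effect size and noise
z_a = norm.ppf(1 - alpha)

# Quasi-Monte Carlo draws, mapped through the inverse CDF to test-statistic
# noise: statistic(n) = sqrt(n) * delta / sigma + z, rejecting when >= z_a.
u = qmc.Sobol(d=1, scramble=True, seed=4).random(2**12).ravel()
z = norm.ppf(u)

# The test rejects for all n >= n_star(z); here the boundary is closed-form.
# Without a closed form, one would root-find n_star for each draw instead.
n_star = np.where(z >= z_a, 1.0,
                  np.ceil(((z_a - z) * sigma / delta) ** 2))
n_star = np.maximum(n_star, 1.0)

ns = np.arange(1, 201)
power_curve = (n_star[:, None] <= ns[None, :]).mean(axis=0)
print(f"estimated power at n=50: {power_curve[49]:.3f}")  # analytic ~0.68
```

Every quasi-Monte Carlo draw contributes to the entire power curve at once, which is the source of the reported speedup over rerunning a brute-force simulation at each candidate sample size.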


Power-sampling methods define a geometric and probabilistic framework where sample selection is dictated not only by classical statistical efficiency, but also by physical power constraints, rare event structure, spectral energy densities, and correlation topology. Across disciplines—statistical inference, control, grid reliability, spectrum sensing, and hardware design—these principles enable scalable, energy-efficient, statistically powerful sampling strategies.
