Wavelet Latent Position ERG Models

Updated 23 December 2025
  • WL-ERGs are a multiscale statistical network model that employs compactly supported wavelet expansions to represent log-odds connectivity in vertex-indexed graphs.
  • The model enables multiresolution inference by capturing localized connectivity departures through sparse coefficient estimation and hard thresholding.
  • It unifies exchangeable logistic graphons and latent space models, supporting phase transition detection and minimax optimal recovery in complex networks.

Wavelet Latent Position Exponential Random Graphs (WL-ERGs) are a multiscale statistical network model that generalizes logistic graphons through a compactly supported orthonormal wavelet expansion of the log-odds connectivity kernel. Designed for vertex-indexed networks with observed positioning or embedding, such as spatial, anatomical, or otherwise ordered graphs, WL-ERGs allow explicit, interpretable modeling of connectivity departures from a baseline across distinct resolutions and locations. The core innovation lies in representing the log-odds kernel in wavelet coordinates indexed by both scale and location, producing a framework that is simultaneously exchangeable, interpretable, and suitable for rigorous multiresolution inference, detection, and estimation (Papamichalis et al., 21 Dec 2025).

1. Model Specification and Notation

Each vertex $i$ is equipped with a latent position $x_i \in [0,1]^d$, where $d \geq 1$. The edge structure is specified by a logistic graphon with log-odds function $\eta : [0,1]^d \times [0,1]^d \to \mathbb{R}$, linking latent positions to edge probabilities according to

$$P(A_{ij} = 1 \mid x_i, x_j) = \sigma(\eta(x_i, x_j)), \qquad \sigma(t) = \frac{1}{1+e^{-t}},$$

where $A_{ij}$ denotes the adjacency matrix entry.

The log-odds kernel is expanded in a compactly supported orthonormal wavelet basis $\{\psi_{j,k}\}_{j,k}$ for $[0,1]^d$, where $j$ indexes scale and $k$ indexes spatial location:

$$\eta(x, y) = \sum_{j,k} \beta_{j,k}\, \psi_{j,k}(x)\, \psi_{j,k}(y),$$

or, in shorthand, $\eta = \sum_{j,k} \beta_{j,k}\, \Psi_{j,k}$ with $\Psi_{j,k}(x,y) = \psi_{j,k}(x)\,\psi_{j,k}(y)$. The coefficients $\beta_{j,k}$ govern the magnitude of departures from a baseline (typically the constant coarse-scale term). Edge probabilities follow by

$$P(A_{ij} = 1 \mid x_i, x_j) = \sigma\Big( \sum_{j,k} \beta_{j,k}\, \psi_{j,k}(x_i)\, \psi_{j,k}(x_j) \Big).$$
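As a concrete illustration of the expansion, the sketch below evaluates the log-odds kernel and edge probabilities numerically. It assumes the Haar basis on $[0,1]$ (so $d = 1$) and a single active coefficient; the function names and the diagonal interaction form $\psi_{j,k}(x)\psi_{j,k}(y)$ are illustrative choices for this sketch, not code from the paper.

```python
import math

def haar_psi(j, k, x):
    """Haar wavelet psi_{j,k} on [0,1]: support [k/2^j, (k+1)/2^j), L2-normalized."""
    t = (2 ** j) * x - k              # position within the support, in [0, 1)
    if not 0.0 <= t < 1.0:
        return 0.0
    return 2 ** (j / 2) if t < 0.5 else -(2 ** (j / 2))

def log_odds(x, y, coeffs, beta0=0.0):
    """eta(x, y) = beta0 + sum_{j,k} beta_{j,k} psi_{j,k}(x) psi_{j,k}(y)."""
    return beta0 + sum(b * haar_psi(j, k, x) * haar_psi(j, k, y)
                       for (j, k), b in coeffs.items())

def edge_prob(x, y, coeffs, beta0=0.0):
    """P(A_ij = 1 | x_i, x_j) = sigma(eta(x_i, x_j))."""
    return 1.0 / (1.0 + math.exp(-log_odds(x, y, coeffs, beta0)))

# A single localized departure at scale j=1, location k=0 (support [0, 0.5)):
coeffs = {(1, 0): 1.5}
p_inside  = edge_prob(0.10, 0.15, coeffs)   # both positions inside the bump
p_outside = edge_prob(0.60, 0.90, coeffs)   # outside: baseline sigma(0) = 0.5
```

Vertex pairs whose positions both fall in the support of the active wavelet see elevated connection odds; all other pairs remain at the baseline $\sigma(0) = 1/2$.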

2. Multiscale Wavelet Representation

The wavelet basis structure is central to the WL-ERG framework. For each scale $j$, the system $\{\psi_{j,k}\}_k$ is orthonormal, and each $\psi_{j,k}$ is supported on a set of diameter $O(2^{-j})$. For $j \geq 1$, the basis functions integrate to zero, making them "detail" or "difference" components. The spatial location index $k$ runs over $O(2^{jd})$ translates at scale $j$, providing spatial localization.

Sparsity in the coefficient array $\{\beta_{j,k}\}$ directly encodes that the connectivity structure is mostly constant except for a small number of localized and scale-specific perturbations. Thus, if most $\beta_{j,k}$ are zero, the resulting network is nearly homogeneous; the presence of a few nonzero coefficients corresponds to interpretable, multiresolution deviations. This property supports direct, interpretable recovery of network modularity or anomalies at multiple scales.

3. Exponential-Family Truncations and Sufficient Statistics

Finite truncation to scales $j \leq J$ yields a model

$$\eta_J(x, y) = \sum_{j \leq J} \sum_k \beta_{j,k}\, \psi_{j,k}(x)\, \psi_{j,k}(y),$$

which defines, conditional on latent positions $x_{1:n}$, an exponential family over adjacency matrices $A$ with canonical parameters $\{\beta_{j,k}\}_{j \leq J}$. The model distribution is

$$P_\beta(A \mid x_{1:n}) = \exp\Big( \sum_{j \leq J} \sum_k \beta_{j,k}\, T_{j,k}(A) - \Phi(\beta) \Big),$$

where

$$T_{j,k}(A) = \sum_{i < i'} A_{ii'}\, \psi_{j,k}(x_i)\, \psi_{j,k}(x_{i'})$$

and $\Phi(\beta)$ is the log-partition function.

The sufficient statistics $T_{j,k}$ are multiscale wavelet interaction counts between vertex pairs, permitting a maximum-entropy characterization: the model is the unique maximizer of Shannon entropy among all distributions with prescribed expectations for the statistics $T_{j,k}$ across scales $j \leq J$ and locations $k$.
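To make the sufficient statistics concrete, the sketch below computes a multiscale interaction count $T_{j,k}(A) = \sum_{i<i'} A_{ii'}\,\psi_{j,k}(x_i)\,\psi_{j,k}(x_{i'})$ for a toy graph, again using the Haar basis in $d=1$ as an illustrative assumption.

```python
def haar_psi(j, k, x):
    """Haar wavelet psi_{j,k} on [0,1], L2-normalized."""
    t = (2 ** j) * x - k
    if not 0.0 <= t < 1.0:
        return 0.0
    return 2 ** (j / 2) if t < 0.5 else -(2 ** (j / 2))

def interaction_count(A, xs, j, k):
    """T_{j,k}(A): sum over pairs i < i' of A[i][i'] * psi(x_i) * psi(x_i')."""
    psi = [haar_psi(j, k, x) for x in xs]
    n = len(xs)
    return sum(A[i][ip] * psi[i] * psi[ip]
               for i in range(n) for ip in range(i + 1, n))

# Toy graph: three vertices, edges 0-1 and 0-2.
xs = [0.10, 0.20, 0.70]
A = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]
t10 = interaction_count(A, xs, j=1, k=0)   # only the 0-1 pair contributes
```

Only pairs whose endpoints both lie in the support of $\psi_{1,0}$ contribute; the edge 0-2 leaves the support and receives weight zero.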

4. Estimation and Coefficient-Space Regularization

Empirical estimation begins with the observed adjacency matrix $A$ and latent positions $x_{1:n}$, from which empirical wavelet coefficients $\tilde\beta_{j,k}$ are computed. A maximal scale $J_n$ (growing with $n$) is selected, along with a threshold $\lambda_n$. Hard thresholding is performed,

$$\hat\beta_{j,k} = \tilde\beta_{j,k}\, \mathbf{1}\{|\tilde\beta_{j,k}| \geq \lambda_n\}, \qquad j \leq J_n,$$

and all finer-scale coefficients are set to zero. The composite estimate for the log-odds kernel is then

$$\hat\eta(x, y) = \sum_{j \leq J_n} \sum_k \hat\beta_{j,k}\, \psi_{j,k}(x)\, \psi_{j,k}(y).$$

Near-minimax rates are achieved: if the true kernel belongs to a Besov ball of smoothness $s$ with a prescribed sparsity level, the thresholded estimator attains the minimax squared-error rate for kernel recovery, up to logarithmic factors, under multiscale sparsity; analogous guarantees hold for estimation of the coefficients $\beta_{j,k}$ themselves. This supports likelihood-based regularization and thresholding directly in coefficient space.
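A minimal sketch of the coefficient-space regularization step: hard-threshold empirical coefficients at level $\lambda_n$ and discard everything finer than scale $J_n$. The selection rules for $J_n$ and $\lambda_n$ are not reproduced here, so the values below are purely illustrative.

```python
def hard_threshold(coeffs, lam, j_max):
    """Keep beta_{j,k} only if j <= j_max and |beta_{j,k}| >= lam; drop the rest."""
    return {(j, k): b for (j, k), b in coeffs.items()
            if j <= j_max and abs(b) >= lam}

# Illustrative empirical coefficients, keyed by (scale, location):
raw = {(0, 0): 0.90, (1, 1): 0.05, (2, 3): -0.70, (5, 10): 1.20}
kept = hard_threshold(raw, lam=0.10, j_max=3)
# (1, 1) falls below the threshold; (5, 10) is finer than j_max; both are dropped.
```

The surviving coefficients define the composite estimate $\hat\eta$; everything else is treated as noise.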

5. Expressivity, Universality, and Multiscale Detection

Every logistic graphon whose log-odds kernel satisfies $\eta \in L^2([0,1]^d \times [0,1]^d)$ admits an expansion in any orthonormal wavelet basis, establishing the WL-ERG as universal over square-integrable logistic graphons.

WL-ERGs encode phase transitions for recovery and detection at each scale. For hierarchical block models, at scale $j$ an effective signal-to-noise ratio $\mathrm{SNR}_j$ is defined in terms of the average connection probability $\bar{p}_j$ at scale $j$ and the size $\delta_j$ of the perturbation. If $\mathrm{SNR}_j \to \infty$, wavelet-based label recovery at scale $j$ succeeds with vanishing error; if $\mathrm{SNR}_j$ remains bounded by a constant, no estimator outperforms random guessing. For detection of a localized bump of amplitude $\delta$ on a group of $m$ vertices, there is a sharp detection boundary in $\delta$ as a function of $m$ and $n$. Wavelet scan statistics adaptively achieve these boundaries across scales.
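The idea behind a wavelet scan can be sketched as follows: compute the interaction statistics $T_{j,k}$ over all scales and locations and flag the largest in magnitude. The toy scan below plants a single localized edge among the lowest-position vertices and recovers the matching scale and location; the paper's specific standardization of the statistics is omitted, so this is a structural illustration only.

```python
def haar_psi(j, k, x):
    """Haar wavelet psi_{j,k} on [0,1], L2-normalized."""
    t = (2 ** j) * x - k
    if not 0.0 <= t < 1.0:
        return 0.0
    return 2 ** (j / 2) if t < 0.5 else -(2 ** (j / 2))

def scan(A, xs, j_max):
    """Return the (j, k) maximizing |T_{j,k}(A)| over scales j <= j_max."""
    n = len(xs)
    best, best_val = None, -1.0
    for j in range(j_max + 1):
        for k in range(2 ** j):
            psi = [haar_psi(j, k, x) for x in xs]
            t = sum(A[i][ip] * psi[i] * psi[ip]
                    for i in range(n) for ip in range(i + 1, n))
            if abs(t) > best_val:
                best, best_val = (j, k), abs(t)
    return best

# Eight evenly spaced vertices; one planted edge between the two leftmost.
n = 8
xs = [(i + 0.5) / n for i in range(n)]
A = [[0] * n for _ in range(n)]
A[0][1] = A[1][0] = 1
hit = scan(A, xs, j_max=2)   # localizes to scale 2, location 0
```

The planted edge sits inside the support of $\psi_{2,0}$, so the finest matching scale yields the largest statistic and the scan localizes the perturbation.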

6. Band-Limited Regimes and Large Deviations

Imposing a band-limited regime for parameters—restricting to a finite band $j \leq J$ in wavelet space, with uniformly bounded coefficients—ensures strong non-degeneracy properties typical of well-behaved exponential random graph models (ERGMs). For any parameter in the band, the edge density concentrates within a nontrivial interval bounded away from $0$ and $1$ with probability tending to one. Subgraph frequencies converge almost surely to their population analogs, preserving cut-metric convergence and bounding frequencies away from zero and one.
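Because the sufficient statistics are linear in $A$, edges are conditionally independent Bernoulli given the latent positions, so a band-limited WL-ERG can be simulated directly. The Monte Carlo sketch below (Haar basis, $d=1$, illustrative parameter values) checks that the empirical edge density lands in a nontrivial interval across repeated draws.

```python
import math
import random

random.seed(0)

def haar_psi(j, k, x):
    """Haar wavelet psi_{j,k} on [0,1], L2-normalized."""
    t = (2 ** j) * x - k
    if not 0.0 <= t < 1.0:
        return 0.0
    return 2 ** (j / 2) if t < 0.5 else -(2 ** (j / 2))

def sample_density(n, coeffs, beta0=0.0):
    """Draw one band-limited WL-ERG graph and return its edge density."""
    xs = [random.random() for _ in range(n)]
    edges, pairs = 0, 0
    for i in range(n):
        for ip in range(i + 1, n):
            eta = beta0 + sum(b * haar_psi(j, k, xs[i]) * haar_psi(j, k, xs[ip])
                              for (j, k), b in coeffs.items())
            p = 1.0 / (1.0 + math.exp(-eta))
            edges += random.random() < p
            pairs += 1
    return edges / pairs

densities = [sample_density(60, {(1, 0): 1.0}) for _ in range(20)]
# With beta0 = 0 the baseline probability is 1/2; draws concentrate near it,
# bounded well away from the degenerate densities 0 and 1.
spread = max(densities) - min(densities)
```

This mirrors the non-degeneracy claim: the density neither collapses to empty/complete graphs nor fluctuates wildly between draws.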

The normalized multiscale interaction vector $T(A)/\binom{n}{2}$ satisfies a large deviation principle whose rate function is the Legendre transform of the limiting normalized log-partition function,

$$I(t) = \sup_{\beta} \big\{ \langle \beta, t \rangle - \Phi_\infty(\beta) \big\},$$

with

$$\Phi_\infty(\beta) = \lim_{n \to \infty} \binom{n}{2}^{-1} \Phi(\beta).$$

The dual function $\Phi_\infty$ is strictly convex and analytic, implying that canonical exponential tilts and rare-event rates are stable, and precluding the degeneracies common in classical ERGMs.

7. Connections, Applicability, and Theoretical Significance

WL-ERGs directly unify concepts from exchangeable logistic graphons, wavelet-based multiresolution analysis, conditional exponential-family structure, and sparse recovery. They clarify the relationships and distinctions between block models, latent space models, small-world graphs, and general graphon formulations by supplying a canonical multiresolution parameterization accessible to interpretation and regularization. The framework admits phase transition analysis for detection at different resolutions, supports likelihood-based regularization/testing, and facilitates rigorous, scale-adaptive recovery with minimax optimality under natural regularity.

Applications include, but are not limited to, spatial networks, connectomics, and any domain where spatial or geometric vertex ordering provides interpretable structure for multiresolution connectivity analysis (Papamichalis et al., 21 Dec 2025).
