
Invariant Measures with Entropy

Updated 13 January 2026
  • Invariant measures with entropy are probability measures preserved by a transformation that quantify system complexity through a well-defined entropy functional.
  • They underpin key results in ergodic theory, smooth and symbolic dynamics, PDEs, and mathematical physics by linking orbit separation with information rates.
  • Entropy-dimension relations and variational principles reveal that high entropy forces precise orbit separation, enabling optimization in both deterministic and stochastic analyses.

An invariant measure with entropy is a probability measure preserved by a flow or transformation that quantifies system complexity via a well-defined entropy functional. The interplay of invariance, entropy, and dynamical separation underpins key results in ergodic theory, smooth and symbolic dynamics, stochastic systems, PDEs, and mathematical physics. The entropy quantifies information per orbit and ties directly to geometric, probabilistic, statistical, and combinatorial properties of the underlying system.

1. Definitions: Invariant Measures and Entropy

Let $(X,d)$ be a compact metric space and $f:X\to X$ a continuous map. A Borel probability measure $\mu$ on $X$ is invariant if $\mu(f^{-1}A)=\mu(A)$ for all Borel sets $A$. Invariance extends to flows and semigroups, sometimes incorporating noise, written symbolically as $\mu(P_t A)=\mu(A)$ for $t\geq 0$. A measure is ergodic if every invariant set has measure $0$ or $1$.

Entropy quantifies unpredictability and orbit separation:

  • Kolmogorov–Sinai entropy: For a measurable partition $\alpha$,

$$h_\mu(f,\alpha)=\lim_{n\to\infty} \frac{1}{n} H_\mu\left( \bigvee_{i=0}^{n-1} f^{-i}\alpha\right),\qquad h_\mu(f)=\sup_\alpha h_\mu(f,\alpha),$$

where $H_\mu(\alpha) = -\sum_{B\in\alpha} \mu(B) \log \mu(B)$.
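To make the definition concrete, here is a minimal numerical sketch (plain NumPy; function names are illustrative) for a Bernoulli shift, where cylinder masses factor as products and the block entropy of the $n$-fold join equals $n$ times the one-step entropy, so the limit is attained at every $n$:

```python
import numpy as np

def shannon_entropy(probs):
    """H(alpha) = -sum_B mu(B) log mu(B), skipping zero-mass cells."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def bernoulli_block_entropy(p, n):
    """(1/n) * entropy of the n-fold join of the coordinate partition
    under the Bernoulli(p) shift; cylinder masses are products."""
    cyl = np.array([1.0])
    for _ in range(n):
        cyl = np.outer(cyl, p).ravel()
    return shannon_entropy(cyl) / n

p = [0.3, 0.7]
print(bernoulli_block_entropy(p, 10))  # matches shannon_entropy(p)
```

For a fair coin the value is $\log 2$, the maximal entropy on two symbols.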

  • Local/Brin–Katok entropy: For $x\in X$,

$$h_\mu(x) = \lim_{\delta\to 0} \limsup_{n\to\infty} -\frac{1}{n}\log \mu(B[x,n,\delta]),$$

with $B[x,n,\delta]=\{y: d(f^i(x),f^i(y))\leq\delta \text{ for } i=0,\dots,n-1\}$ (Arbieto et al., 2011, Ovadia et al., 2023).
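For intuition, consider the doubling map $f(x)=2x \bmod 1$ with Lebesgue measure (a classical example, not drawn from the cited papers): nearby orbits separate at rate $2^i$, so the dynamical ball is an interval of radius $\delta\,2^{-(n-1)}$ and the Brin–Katok rate recovers $h_\mu=\log 2$. A sketch:

```python
import numpy as np

def local_entropy_doubling(delta=1e-3, n=1000):
    """Brin-Katok rate for f(x) = 2x mod 1 with Lebesgue measure.
    Since d(f^i x, f^i y) = 2^i |x - y| for nearby points, the ball
    B[x, n, delta] is an interval of radius delta * 2^{-(n-1)},
    so mu(B) = 2 * delta * 2^{-(n-1)}; work in logs to avoid underflow."""
    log_mu_ball = np.log(2 * delta) - (n - 1) * np.log(2)
    return -log_mu_ball / n

print(local_entropy_doubling())  # approaches log 2 ~ 0.6931 as n grows
```

The $\delta$-dependent term is $O(1/n)$, which is why the order of limits in the definition is harmless here.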

In smooth flows (SDEs, PDEs), entropy can refer to differential entropy:

$$\mathcal{H}(\mu) = -\int_X u(x)\log u(x)\,dx,$$

where $u$ is the density of $\mu$ (Li et al., 2016, Wang, 2013).
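As a sanity check on this functional, the Gaussian density with variance $\sigma^2$ has the closed form $\mathcal{H}=\tfrac{1}{2}\log(2\pi e\,\sigma^2)$, which simple quadrature reproduces (the grid and truncation below are illustrative choices):

```python
import numpy as np

def differential_entropy(u, xs):
    """H(mu) = -int u(x) log u(x) dx, trapezoidal rule on grid xs."""
    integrand = np.where(u > 0, -u * np.log(np.where(u > 0, u, 1.0)), 0.0)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs)))

sigma = 1.5
xs = np.linspace(-12.0, 12.0, 20001)   # ~8 sigma: tails are negligible
u = np.exp(-xs**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

h_num = differential_entropy(u, xs)
h_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(h_num, h_exact)  # the two agree to high precision
```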

2. Measure-Theoretic and Geometric Expansivity

Carvalho–Rodrigues–Tahzibi–Varandas establish: any ergodic invariant measure $\mu$ with $h_\mu(f)>0$ admits $\delta>0$ such that for $\mu$-almost every $x$, the forward dynamical ball

$$B^f_\delta(x)=\{y\in X: d(f^i(x),f^i(y))\leq\delta\ \forall i\geq 0\}$$

has $\mu(B^f_\delta(x))=0$; i.e., $\mu$ is measure-expansive (Arbieto et al., 2011). Positive entropy therefore forces orbit separation at scale $\delta$ almost everywhere, implying that stable classes and wandering intervals have zero measure.

This links entropy to orbit configuration: stable sets $W^s(p)=\{x: \lim_{n\to\infty} d(f^n(x),f^n(p))=0\}$ also carry no mass for $\mu$ with $h_\mu(f)>0$. In systems with countably many stable classes, or with Lyapunov stability on their recurrent sets, $h_{\mathrm{top}}(f)=0$.

Consequently, Li–Yorke chaotic sets, which exhibit both arbitrarily small and large separation under iteration, give rise to expansive invariant measures. The geometric separation enabled by positive entropy is robust across systems with strong orbit mixing.

3. Entropy–Dimension Relations: Local Formulas and Bounds

For $C^{1+\beta}$ diffeomorphisms, the neutralized local entropy $h^N_\mu(x)$ is constructed from exponentially shrinking Bowen balls

$$B_n^N(x,r) = \{ y: d(f^i(x), f^i(y)) \leq e^{-rn},\ 0\leq i\leq n \}$$

via

$$h^N_\mu(x) = \lim_{r\to 0} \limsup_{n\to\infty} -\frac{1}{n} \log \mu\left(B_n^N(x,r)\right),$$

which, for $\mu$-almost every $x$, coincides with the Brin–Katok local entropy (Ovadia et al., 2023). This construction neutralizes nonuniformities of the dynamics and measures entropy through genuinely geometric balls.

The pointwise dimension obeys the lower bound

$$\underline{d}_\mu(x) \geq h^N_\mu(x) / \chi(x),$$

where $\chi(x)$ is the maximal Lyapunov exponent at $x$; this links metric entropy to fractal dimensions. For uniformly hyperbolic systems, exact-dimensionality $d_\mu(x) = d^u + d^s$ recovers classical results (Ovadia et al., 2023, Condori et al., 2019).
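A classical worked instance of this bound (not specific to the cited papers): for $f(x)=3x \bmod 1$ restricted to the middle-thirds Cantor set, the symmetric Bernoulli measure on ternary digits $\{0,2\}$ has $h_\mu=\log 2$ and constant exponent $\chi=\log 3$, and the ratio reproduces the Hausdorff dimension of the Cantor set, where the bound is sharp:

```python
import numpy as np

# Middle-thirds Cantor set, invariant under f(x) = 3x mod 1.
# The (1/2, 1/2) Bernoulli measure on ternary digits {0, 2} has
# metric entropy h = log 2 and Lyapunov exponent chi = log 3:
h = np.log(2.0)
chi = np.log(3.0)
print(h / chi)  # ~0.6309 = dim_H of the middle-thirds Cantor set
```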

4. Functional and Combinatorial Entropy: Symbolic, Logical, and Statistical Contexts

Symbolic dynamics and logic actions require different entropy frameworks:

  • On configuration spaces (e.g., $\Omega^{\mathbb{Z}^d}$), entropy is often maximal for Gibbs measures extending locally translation-invariant marginals (Goldstein et al., 2015).
  • For invariant measures on $L$-structures over $\mathbb{N}$, the entropy function maps $n$ to the entropy of the restricted structure, with polynomial and sub-polynomial growth regimes governed by the logical complexity (Ackerman et al., 2018).
  • In stochastic processes (hidden Markov models), algebraic invariant measures admit explicit entropy rate calculations via Markov operator techniques, yielding integral expressions and practical numerical formulas for information rates (Marchand et al., 2011).
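As a baseline for such information-rate computations, the fully observed Markov case (simpler than the hidden Markov setting) has the closed-form rate $h=-\sum_i \pi_i\sum_j P_{ij}\log P_{ij}$ with $\pi$ the stationary distribution; a minimal sketch, with illustrative function names:

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    pi = np.abs(np.real(V[:, np.argmin(np.abs(w - 1.0))]))
    return pi / pi.sum()

def markov_entropy_rate(P):
    """h = -sum_i pi_i sum_j P_ij log P_ij (nats per step)."""
    P = np.asarray(P, dtype=float)
    pi = stationary(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

P = [[0.9, 0.1],
     [0.5, 0.5]]
print(markov_entropy_rate(P))  # ~0.3864 nats per step
```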

Maximal and minimal entropy extensions are classified via combinatorial tools (de Bruijn graphs, Gibbs–Shannon variational principles), with phase transitions and undecidability issues arising in higher-dimensional symbolic spaces.

5. Variational Principles and Constrained Optimization

The entropy of invariant measures is fundamentally tied to variational principles:

$$h_{\mathrm{top}}(f) = \sup_{\mu\in \mathcal{M}(f)} h_\mu(f),$$

where $\mathcal{M}(f)$ is the set of invariant measures. The maximizing measures (often ergodic) capture the complexity inherent in the system.
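For subshifts of finite type both sides of the variational principle are computable: $h_{\mathrm{top}}$ is the log of the spectral radius of the transition matrix, every compatible Markov measure has $h_\mu \leq h_{\mathrm{top}}$, and the Parry measure attains the supremum. A sketch for the golden-mean shift (classical facts; helper names are illustrative):

```python
import numpy as np

def stationary(P):
    w, V = np.linalg.eig(P.T)
    pi = np.abs(np.real(V[:, np.argmin(np.abs(w - 1.0))]))
    return pi / pi.sum()

def markov_entropy_rate(P):
    pi = stationary(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

def parry_measure(A):
    """Entropy-maximizing Markov measure on the SFT with 0/1 matrix A:
    P_ij = A_ij v_j / (lam v_i), v the Perron right eigenvector."""
    w, V = np.linalg.eig(A.astype(float))
    k = np.argmax(np.real(w))
    lam = float(np.real(w[k]))
    v = np.abs(np.real(V[:, k]))
    return A * v[None, :] / (lam * v[:, None]), lam

# Golden-mean shift: sequences over {0,1} with no "11" factor.
A = np.array([[1, 1],
              [1, 0]])
P, lam = parry_measure(A)
print(np.log(lam))             # h_top ~ 0.4812 (log of the golden ratio)
print(markov_entropy_rate(P))  # the Parry measure attains it

# Any other compatible Markov measure does strictly worse:
Q = np.array([[0.9, 0.1],
              [1.0, 0.0]])
print(markov_entropy_rate(Q))  # < h_top
```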

Constrained ergodic optimization (maximizing integrals over subsets defined by entropy or pressure thresholds) shows that for generic observables in transitive systems with the shadowing property and upper semicontinuous entropy, the optimizer is unique, ergodic, has full support $X$, and achieves the prescribed entropy level (Lin et al., 2021). This has direct consequences for pressure, large deviations, and thermodynamic formalism.

Maximum entropy methods (e.g., for absolutely continuous conditionally invariant measures, ACCIMs, in open dynamical systems) select invariant measures by convex optimization of entropy subject to dynamical and moment constraints (Bose et al., 2012, Frank et al., 2010). These approaches extend naturally to flows, partially hyperbolic systems, and PDEs, where entropy maximization balances randomness against structural constraints; in Kan endomorphisms, for example, multiple maximizing measures may coexist, two on the boundaries and one in the interior (Núñez-Madariaga et al., 2020).
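The simplest instance of such constrained entropy maximization (a toy sketch, not the ACCIM machinery of the cited papers): on a finite support with a single moment constraint, Lagrange duality gives a Gibbs-family optimizer $p_i\propto e^{\beta x_i}$, with $\beta$ found by bisection since the mean is monotone in $\beta$:

```python
import numpy as np

def maxent_given_mean(xs, m, lo=-50.0, hi=50.0, iters=200):
    """Maximize -sum p_i log p_i over distributions on xs subject to
    sum p_i x_i = m. The optimizer is p_i ~ exp(beta * x_i); since the
    resulting mean is increasing in beta, bisect on beta to hit m."""
    xs = np.asarray(xs, dtype=float)
    p = None
    for _ in range(iters):
        beta = 0.5 * (lo + hi)
        t = beta * xs
        w = np.exp(t - t.max())   # numerically stable softmax weights
        p = w / w.sum()
        if p @ xs < m:
            lo = beta
        else:
            hi = beta
    return p

faces = [1, 2, 3, 4, 5, 6]
p_uniform = maxent_given_mean(faces, 3.5)  # mean 3.5 -> the uniform die
p_tilted = maxent_given_mean(faces, 4.5)   # mean 4.5 -> tilted to high faces
print(p_uniform)
print(p_tilted)
```

With the unconstrained mean (3.5) the optimizer is uniform, the finite-support analogue of entropy maximization selecting the "most random" measure compatible with the constraints.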

6. Connections to Spectral, Statistical, and Geometric Properties

Zero entropy has sharp consequences: in one-dimensional continua (quasi-graphs, dendrites with finitely many endpoint accumulation points), all invariant measures of zero-entropy maps have discrete (pure-point) spectrum (Li et al., 2018). Nonzero entropy typically correlates with continuous spectrum and the existence of horseshoes, horseshoe measures, and complex orbit structures.

In matrix cocycle dynamics, Lyapunov-optimizing invariant measures are zero-entropy under domination and non-overlapping conditions, but positive entropy arises if domination fails, even with zero Lyapunov exponents (Bochi et al., 2013).

For stochastic systems modeled by SDEs or Fokker–Planck equations, entropy relates measure concentration to attractor dimension (differential entropy scaling as $(n-d)|\log \epsilon|$) (Li et al., 2016), and $\Phi$-entropy inequalities enable existence, uniqueness, and exponential convergence of invariant laws in jump-driven Lévy systems (Wang, 2013).

In Hamiltonian PDEs (Euler, Vlasov), invariant Young measures constructed via maximum mean-field entropy subject to dynamical invariants yield microcanonical measures, with uniqueness in convex, repulsive cases (Bouchet et al., 2010).

7. Extensions, Limitations, and Open Directions

The theory robustly generalizes to higher-dimensional, noncompact, or nonergodic settings only under additional hypotheses. Open questions persist on the density of ergodic measures in simplex closures (the Poulsen property for hereditary subshifts (Kułaga-Przymus et al., 2015)), entropy-dimension inequalities in nonuniformly hyperbolic systems, and undecidability of invariant extensions in high dimensions (Goldstein et al., 2015).

Variational and entropy-optimization principles remain central to advances in smooth dynamics, random systems, symbolic computation, and mathematical statistical physics. The continuing refinement of local entropy constructs, optimization under spectral and geometric constraints, and algorithms for entropy rate computation reflects the core role of invariant measures with entropy in understanding dynamical complexity and statistical structure.
