Extremal Alexandrov estimates: singularities, obstacles, and stability
Abstract: The classical Alexandrov estimate controls the oscillation of a convex function by the mass of its associated Monge-Ampère measure and yields, for two convex functions of $n$ variables with the same boundary values, a sup-norm bound with exponent $1/n$ in the measure discrepancy. We show that this exponent is not optimal in the small-discrepancy regime once one of the functions is non-degenerate in the sense of having Monge-Ampère density bounded above and below by two positive constants. We prove sharp quantitative estimates comparing two convex functions by the total variation of the difference of their Monge-Ampère measures: in dimensions $n\ge 3$ the optimal dependence is quadratic in the natural mass scale, while in dimension $n=2$ the optimal dependence contains a logarithmic correction. These rates are shown to be optimal for all small discrepancies. A key structural ingredient is a characterization of extremizers. We identify the pointwise minimizers and maximizers in the admissible class and prove that they are realized, respectively, by solutions to Monge-Ampère equations with an isolated singularity and by solutions to Monge-Ampère equations with a linear obstacle. This extremal description reduces the sharp estimates to a precise asymptotic analysis of these two model configurations. Assuming further that the domain and the non-degenerate reference function are $C^{2,\alpha}$ and uniformly convex, we obtain sharp pointwise two-sided asymptotics at interior points with explicit leading constants. Finally, in dimensions $n\ge 3$ we establish a stability phenomenon: if the pointwise estimate is nearly saturated, then the measure discrepancy must concentrate near the point at the natural scale, quantifying rigidity of almost-extremal configurations.
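To fix notation, the following is a minimal LaTeX sketch of the classical comparison estimate the abstract starts from, under schematic normalizations: $Mu$, $Mv$ denote the Monge-Ampère measures of $u$, $v$, and the constants $\lambda$, $\Lambda$, $C(n,\Omega)$ are hypothetical labels introduced here, not taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Classical Alexandrov-type comparison estimate (schematic normalization):
% for convex $u,v$ on a bounded convex domain $\Omega\subset\mathbb{R}^n$
% with $u=v$ on $\partial\Omega$, and $Mu$, $Mv$ their Monge--Amp\`ere measures,
\[
  \sup_{\Omega}\,|u-v| \;\le\; C(n,\Omega)\,\bigl(|Mu-Mv|(\Omega)\bigr)^{1/n},
\]
% with the exponent $1/n$ referred to in the abstract. Non-degeneracy of the
% reference function $v$ is understood as two-sided density bounds
% (the constants $\lambda,\Lambda$ are placeholder labels):
\[
  \lambda\,dx \;\le\; Mv \;\le\; \Lambda\,dx \quad\text{in }\Omega,
  \qquad 0<\lambda\le\Lambda .
\]
\end{document}
```

The paper's sharp small-discrepancy rates improve on the $1/n$ exponent above when $v$ satisfies these density bounds; their precise form is not reproduced here.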