Regularity-Constrained Value Functions

Updated 30 January 2026
  • Value functions with regularity constraints are functions that quantify minimal cost-to-go in control problems while enforcing smoothness properties like Lipschitz continuity.
  • They arise in diverse settings—from semilinear PDE control to stochastic games—where regularity ensures well-posedness and robust feedback synthesis.
  • Regularity constraints facilitate numerical approximations and sensitivity analysis by guaranteeing bounded derivatives and smooth behavioral transitions.

A value function with regularity constraints quantifies the minimal cost-to-go in an optimal control or decision problem while explicitly enforcing or certifying smoothness, continuity, or Lipschitz properties of the value function. Regularity constraints arise from intrinsic features of the data (e.g., control or state constraints) or from imposed structural restrictions (e.g., on the Hessian or gradient of an approximating value function), and they play a critical role in enabling well-posedness, feedback synthesis, and the mathematical and numerical analysis of dynamic programming equations.

1. Mathematical Formulation and Canonical Settings

Value functions subject to regularity constraints naturally appear in deterministic and stochastic infinite-dimensional control, finite-dimensional parametric optimization, and stochastic games. A prototypical setting is the infinite-horizon optimal stabilization problem for a semilinear parabolic PDE: $y_t = A y + F(y) + B u, \quad y(0) = y_0$, with cost functional

$$V(y_0) = \inf_{u\in U_{\rm ad}} \int_0^\infty \ell(y(t),u(t))\, e^{-\lambda t}\, dt, \qquad \ell(y,u) = \frac12\|y\|_Y^2 + \frac{\alpha}{2}\|u\|_U^2,$$

and norm-constrained controls $\|u(t)\|_U \leq r$ for almost every $t$ (Kunisch et al., 2021).
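As a concrete finite-dimensional illustration, the discounted cost above can be approximated numerically. The sketch below (all parameters illustrative, not taken from the cited work) simulates a scalar analogue $y' = a y + u$ under a saturated feedback respecting the constraint set $\{|u| \le r\}$ and accumulates the discounted running cost by an Euler rule:

```python
import numpy as np

# Minimal sketch (all parameters illustrative): approximate the discounted cost
# V(y0) = ∫ (½ y² + (α/2) u²) e^{-λt} dt for the scalar system y' = a·y + u
# driven by the norm-constrained feedback u = clip(-k·y, -r, r).
a, alpha, lam, r, k = 0.5, 1.0, 0.1, 1.0, 2.0
dt, T = 1e-3, 40.0

def discounted_cost(y0):
    y, cost, t = y0, 0.0, 0.0
    while t < T:
        u = np.clip(-k * y, -r, r)          # control-constraint set {|u| <= r}
        cost += (0.5 * y**2 + 0.5 * alpha * u**2) * np.exp(-lam * t) * dt
        y += (a * y + u) * dt               # explicit Euler step
        t += dt
    return cost

print(discounted_cost(0.5))
```

Since the feedback stabilizes the system ($a - k < 0$), the truncation horizon $T$ only needs to be large relative to $1/\lambda$ for the tail of the integral to be negligible.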

Regularity constraints can also be imposed directly during value function approximation, as in value-gradient iteration with quadratic models, where the value function $\hat V(x) = x^\top P x + 2q^\top x + r$ is learned under explicit bounds on the Hessian and gradient: $$m I \preceq P+P^\top \preceq M I, \qquad \| (P+P^\top) x + 2q \|_\infty \leq G_{\max} \ \text{ for } x\in\mathcal{X}.$$ Such constraints guarantee strong convexity and Lipschitz continuity (Yang et al., 2023).
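The curvature constraint $m I \preceq P+P^\top \preceq M I$ can be enforced after each fitting step by projecting onto the feasible set. A minimal sketch (names and data illustrative; the gradient bound over $\mathcal{X}$ would be an additional linear constraint handled separately, e.g., by a convex solver):

```python
import numpy as np

# Minimal sketch: project a fitted quadratic model V̂(x) = xᵀPx + 2qᵀx + r
# onto the curvature constraint m·I ⪯ P+Pᵀ ⪯ M·I by clipping the eigenvalues
# of the symmetric part of P (only the symmetric part enters V̂).
def project_hessian(P, m=0.1, M=10.0):
    S = 0.5 * (P + P.T)                      # symmetric part of P
    w, Q = np.linalg.eigh(S)
    w = np.clip(w, m / 2.0, M / 2.0)         # enforce m ≤ eig(P+Pᵀ) ≤ M
    return Q @ np.diag(w) @ Q.T

P = np.array([[ 0.0,  1.0],
              [-1.0, -5.0]])                 # indefinite fitted model
P_proj = project_hessian(P)
eigs = np.linalg.eigvalsh(P_proj + P_proj.T)
print(eigs)                                  # all eigenvalues land in [0.1, 10.0]
```

Clipping the spectrum of the symmetric part is the Frobenius-nearest feasible point, which is why this projection is the natural correction step inside a fitted value iteration.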

Regularity-proper value functions also arise as solutions to Hamilton–Jacobi–Bellman (HJB) or Isaacs equations—classical, viscosity, or semiconcave/semiconvex—where the existence and uniqueness of solutions hinge on regularity properties determined by the problem's constraints and data regularity (Zhou, 2013, Zhou, 2011, Krylov, 2012).

2. Abstract Regularity Frameworks and Sufficient Conditions

General strategies to guarantee value function regularity are based on embedding the control problem into a parameter-dependent optimization framework. For a value function $V(q) = \inf_{x\in C} f(x)$ subject to $e(x,q) = 0$, abstract hypotheses ensure local $C^1$ regularity of the value function:

  • Twice continuous differentiability of $f$ and $e$ in $x$
  • Regular point condition: $0$ lies in the interior of the relevant derivative images
  • Second-order sufficient conditions (uniform coercivity of the Lagrangian's restricted Hessian)
  • Lipschitz stability and invertibility of the linearized system
  • Compatibility of derivatives and subgradients with the associated Banach spaces
  • Uniform (Lipschitz) continuity of the data and their derivatives

Applying this abstract machinery in a semilinear parabolic PDE context (with $x=(y,u)$, $q=y_0$, $C=U_{\rm ad}$) yields that the value function $V: Y \to \mathbb{R}$ is locally $C^1$, with

$$V'(y_0) = -p(0),$$

where $p(\cdot)$ is the adjoint state from the Pontryagin system (Kunisch et al., 2021).
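The adjoint characterization of the value gradient has a direct discrete-time analogue that is easy to verify numerically. The sketch below (all data illustrative) differentiates the cost $J(y_0) = \sum_t \tfrac12 y_t^2 + \tfrac12 y_T^2$ along fixed controls via a backward adjoint recursion and checks it against a finite difference; in the continuous-time Pontryagin sign convention used in the text, the same object appears as $V'(y_0) = -p(0)$:

```python
import numpy as np

# Minimal sketch (discrete-time analogue, data illustrative): compute dJ/dy0
# for J(y0) = Σ ½ y_t² + ½ y_T² along fixed controls via a backward adjoint
# recursion, and verify against a central finite difference.
a, b, T = 0.9, 0.5, 50
u = 0.1 * np.ones(T)                     # fixed (not necessarily optimal) controls

def cost_and_grad(y0):
    y = np.empty(T + 1)
    y[0] = y0
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]   # forward pass
    J = 0.5 * np.sum(y[:-1]**2) + 0.5 * y[-1]**2
    p = y[-1]                            # adjoint terminal condition
    for t in range(T - 1, -1, -1):
        p = y[t] + a * p                 # backward adjoint recursion
    return J, p                          # dJ/dy0 equals p at t = 0

J, g = cost_and_grad(1.0)
eps = 1e-6
g_fd = (cost_and_grad(1.0 + eps)[0] - cost_and_grad(1.0 - eps)[0]) / (2 * eps)
print(g, g_fd)                           # adjoint and finite-difference gradients agree
```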

For finite-dimensional stochastic or deterministic problems, similar envelope-theorem and parametric regularity results yield differentiability and Lipschitz bounds, provided convexity, lower semicontinuity, and differentiability of costs, together with compactness of feasible sets (Franc et al., 2022, Pablo et al., 2020).
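The finite-dimensional envelope-theorem mechanism behind these results can be demonstrated on a toy parametric problem. A minimal sketch (problem data illustrative): for $V(q) = \min_x f(x,q)$ with smooth strictly convex $f$, the envelope theorem gives $V'(q) = \partial f/\partial q$ evaluated at the minimizer $x^*(q)$:

```python
import numpy as np

# Minimal sketch: envelope theorem for V(q) = min_x f(x,q) with
# f(x,q) = (x−q)² + x⁴, so V'(q) = ∂f/∂q at x*(q), i.e. −2(x*(q)−q).
def argmin_x(q, x=0.0):
    for _ in range(50):                       # Newton on f_x = 2(x−q) + 4x³ = 0
        x -= (2 * (x - q) + 4 * x**3) / (2 + 12 * x**2)
    return x

def V(q):
    x = argmin_x(q)
    return (x - q)**2 + x**4

q, eps = 0.7, 1e-6
x_star = argmin_x(q)
deriv = -2 * (x_star - q)                     # envelope-theorem derivative
fd = (V(q + eps) - V(q - eps)) / (2 * eps)
print(deriv, fd)                              # the two values agree
```

Note that the derivative is obtained without differentiating the minimizer map $q \mapsto x^*(q)$, which is exactly why regularity of $V$ can hold under weaker conditions than regularity of the solution map.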

3. Hamilton–Jacobi–Bellman Equations and Feedback Synthesis

Once value function regularity is ensured, the dynamic programming principle leads to classical HJB equations for deterministic or stochastic systems with constraints: $$\lambda V(y) - \langle A y + F(y), D V(y) \rangle_Y - H(y, D V(y)) = 0,$$ where the Hamiltonian incorporates the control constraints: $$H(y,p) = \sup_{\|u\|_U \leq r} \left\{ -\langle B u, p \rangle_Y - \ell(y,u) \right\}.$$ In the quadratic-constrained case (Kunisch et al., 2021), the optimal feedback is

$$u^*(y) = P_{U_{\rm ad}}\!\left(-\frac{1}{\alpha} B^* D V(y) \right),$$

where $P_{U_{\rm ad}}$ denotes the Hilbert-space projection onto the control-constraint set.
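For a norm-ball constraint set $U_{\rm ad} = \{u : \|u\| \le r\}$, the projection has the closed form $v \mapsto v \cdot \min(1, r/\|v\|)$, so the feedback law is cheap to evaluate. A minimal sketch (data illustrative):

```python
import numpy as np

# Minimal sketch: the constrained-optimal feedback
# u*(y) = P_{U_ad}( -(1/α) B* DV(y) ) for U_ad = {u : ||u|| <= r},
# using the closed-form projection onto the norm ball.
def project_ball(v, r):
    n = np.linalg.norm(v)
    return v if n <= r else (r / n) * v

def feedback(DV_y, B, alpha, r):
    return project_ball(-(1.0 / alpha) * B.T @ DV_y, r)

B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
u = feedback(DV_y=np.array([4.0, 0.0]), B=B, alpha=1.0, r=1.0)
print(u)                       # unconstrained value (-4, 0) projected to (-1, 0)
```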

In stochastic control and dynamic games, the (viscosity) value function solves the (degenerate) Bellman or Isaacs equation with boundary or state constraints. Global and local $C^{1,1}$ or $C^{0,1}$ regularity can be established under control-Lipschitz data, nondegeneracy, and suitable barrier conditions (Zhou, 2011, Zhou, 2013, Krylov, 2012).

4. Special Cases: Games, Discrete Schemes, and Approximation

Discrete dynamic programming schemes for stochastic games with local regularity constraints yield value functions with explicit $C^{1,\gamma}$ or Hölder regularity estimates. In noisy tug-of-war games (interpreting inhomogeneous $p$-Laplace and fully nonlinear equations), the value functions $V_\varepsilon$ of the discrete game satisfy Pucci-type extremal inequalities, ensuring local or global Hölder regularity with exponents depending on the ellipticity constants (Blanc et al., 2022, Han, 2024, Ruosteenoja, 2014).
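The dynamic programming operator of the tug-of-war-with-noise scheme can be iterated directly on a grid. A minimal sketch (a one-dimensional homogeneous instance, with the standard mixing constants $\alpha = (p-2)/(p+n)$, $\beta = (n+2)/(p+n)$ in dimension $n = 1$; in 1D, $p$-harmonic functions are linear, so the iteration should converge to $V(x) = x$ for Dirichlet data $V(0)=0$, $V(1)=1$):

```python
import numpy as np

# Minimal sketch: fixed-point iteration for the tug-of-war-with-noise value,
# V(x) = (α/2)(max + min over neighbor values) + β·(mean over neighbors),
# on a 1D grid with Dirichlet boundary data V(0)=0, V(1)=1.
p = 4.0
alpha, beta = (p - 2) / (p + 1), 3 / (p + 1)   # n = 1 mixing constants
N = 101
x = np.linspace(0.0, 1.0, N)
V = np.zeros(N)
V[-1] = 1.0                                    # boundary data
for _ in range(20000):
    nb = np.stack([V[:-2], V[2:]])             # left/right neighbor values
    V[1:-1] = 0.5 * alpha * (nb.max(0) + nb.min(0)) + beta * nb.mean(0)
print(np.max(np.abs(V - x)))                   # ≈ 0: the game value is linear
```

With only two neighbors per point, max/min and mean coincide up to averaging, so the 1D case degenerates to a Jacobi iteration; in higher dimensions the $\alpha$-weighted max/min term is what encodes the $p$-dependence.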

For approximating value functions of Isaacs equations in stochastic differential games, Krylov constructs smooth approximants $v_K$ with globally bounded second derivatives: $$\| D^2 v_K \|_{L^\infty(\mathbb{R}^d)} \leq C K, \qquad |v_K - v| \leq \frac{C}{K},$$ enabling the use of regular solutions for numerical schemes and theoretical analysis (Krylov, 2012).

5. Applications: Regularization, Sensitivity, and Optimization

Regularity constraints are central in regularization and sensitivity analysis. In convex optimization with regularizers or parametric constraints, the sensitivity of the value function to a regularization parameter $\lambda$ is characterized by the derivative

$$V_p'(\lambda) = R(x(\lambda)),$$

where $x(\lambda)$ solves the penalized problem. Under suitable invertibility conditions, the value functions of the penalized and constrained forms are mutual inverses (Aravkin et al., 2012).
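This sensitivity formula is easy to check on a ridge-regression instance, where the penalized problem $V_p(\lambda) = \min_x \tfrac12\|Ax-b\|^2 + \lambda R(x)$ with $R(x) = \tfrac12\|x\|^2$ has a closed-form minimizer. A minimal sketch (data illustrative):

```python
import numpy as np

# Minimal sketch: verify V_p'(λ) = R(x(λ)) for the ridge problem
# V_p(λ) = min_x ½||Ax−b||² + λ·½||x||², whose minimizer is
# x(λ) = (AᵀA + λI)⁻¹ Aᵀ b.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

def solve(lmb):
    x = np.linalg.solve(A.T @ A + lmb * np.eye(5), A.T @ b)
    return x, 0.5 * np.sum((A @ x - b)**2) + lmb * 0.5 * np.sum(x**2)

lmb, eps = 1.0, 1e-5
x_star, _ = solve(lmb)
deriv = 0.5 * np.sum(x_star**2)               # V_p'(λ) = R(x(λ))
fd = (solve(lmb + eps)[1] - solve(lmb - eps)[1]) / (2 * eps)
print(deriv, fd)                              # the two values agree
```

As in the envelope theorem, no derivative of the solution map $\lambda \mapsto x(\lambda)$ is needed: its contribution vanishes at the minimizer.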

In multistage stochastic optimization, Moreau–Yosida regularization yields differentiable approximants of parametric value functions, with an explicit backward recursion for gradients and Lipschitz constants that are uniform in the regularization parameter (Franc et al., 2022). This enables efficient gradient-based outer-loop policy optimization.
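The smoothing effect of the Moreau–Yosida envelope is visible already in one dimension. A minimal sketch: the envelope $f_\mu(x) = \min_y \big( f(y) + \tfrac{1}{2\mu}|x-y|^2 \big)$ of the nonsmooth $f(x) = |x|$ is the Huber function, which is $C^1$ with a $1/\mu$-Lipschitz gradient and satisfies the uniform bound $0 \le f - f_\mu \le \mu/2$:

```python
import numpy as np

# Minimal sketch: Moreau–Yosida envelope of f(x) = |x|, in closed form
# (quadratic of curvature 1/μ near 0, shifted |x| outside), i.e. the Huber
# function. The uniform gap to |x| is exactly μ/2, attained for |x| ≥ μ.
def moreau_abs(x, mu):
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

x = np.linspace(-2, 2, 401)
for mu in (0.5, 0.1, 0.01):
    gap = np.max(np.abs(x) - moreau_abs(x, mu))
    print(mu, gap)                            # uniform gap equals μ/2
```

Shrinking $\mu$ trades smoothness (the gradient Lipschitz constant grows like $1/\mu$) for accuracy (the gap shrinks like $\mu/2$), which is precisely the trade-off exploited in regularized value iteration.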

6. Limitations, Extensions, and Open Questions

Regularity typically holds only locally (e.g., for small initial data in nonlinear PDEs) or on dense open subsets (in singular or degenerate control), and pointwise constraints (e.g., pathwise in the state or control) complicate second-order analysis. Methods to extend regularity theory to degenerate, bilinear, or nonconvex settings leverage viscosity, probabilistic, or game-theoretic techniques, often requiring additional geometric or viability conditions or strong stabilization (Barilari et al., 2016, Kunisch et al., 2023, Rosestolato et al., 2015).

Regularization techniques (such as Moreau envelopes or Hessian bounds in value iteration) systematically trade sharpness for smoothness to ensure practical differentiability, policy robustness, and tractability of numerical methods (Yang et al., 2023, Franc et al., 2022). Sharp characterization of the regions of non-smoothness, and the quantitative dependence of regularity exponents on problem parameters, remain significant directions for future research.

Selected Literature Table

| Setting | Regularity Achieved | Main Conditions |
|---|---|---|
| Semilinear PDE control (Kunisch et al., 2021, Kunisch et al., 2023) | Local $C^1$ in $L^2(\Omega)$ or $H^1(\Omega)$ | Analytic semigroup, Fréchet-differentiable nonlinearity, coercivity, small initial data |
| Finite/stochastic DP (Franc et al., 2022, Pablo et al., 2020) | Global/local Lipschitz, $C^1$ | Convexity, compactness, differentiability, Moreau regularization |
| SDE games/Isaacs (Zhou, 2013, Krylov, 2012) | Local $C^{0,1}$, Lipschitz | Uniform ellipticity, controllable coefficients, geometric barrier |
| Value function approximation (Yang et al., 2023) | $C^1$/$C^2$ with bounded Hessian | Convex quadratic fitting, LMI constraints, sample complexity |
| Parametric convex programming (Aravkin et al., 2012) | $C^1$, sensitivity via multipliers | Proper/closed/convex data, dual attainment/invertibility |

In summary, value functions with regularity constraints integrate analytic, functional, and algorithmic properties central to modern control and optimization, unifying sensitivity analysis, feedback synthesis, and constructive approximation under the umbrella of regularity theory.
