Dynkin Game with Partial Information
- A Dynkin game with partial information is a framework in which one or more players operate with asymmetric observations and rely on randomized stopping times.
- The approach employs augmented state spaces and filtering techniques to update beliefs and derive explicit equilibrium conditions.
- Applications span finance, economics, and engineering, addressing optimal stopping problems under uncertainty and hidden drift factors.
A Dynkin game with partial information is a two-player optimal stopping game in continuous or discrete time, in which at least one player does not have direct access to the full state or structural parameters of the payoff process. This foundational class in stochastic game theory generalizes classical Dynkin games by introducing uncertainty or information asymmetries into the players’ observation schemes, leading to strategic interactions where actions must be calibrated not only to state evolution but also to belief updates about hidden game variables or opponent actions.
1. General Model Framework
A canonical Dynkin game with partial information is formulated on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$. Each player $i \in \{1, 2\}$ acts with respect to their own subfiltration $(\mathcal{F}^i_t) \subseteq (\mathcal{F}_t)$, which may encode incomplete or asymmetric access to the driving processes. Payoff structure is defined by (possibly random) processes $(f_t)$, $(g_t)$, $(h_t)$, representing payments for early stopping, opponent's preemption, or simultaneous stopping events, respectively, typically constrained by $f_t \le h_t \le g_t$ for all $t$.
Unlike the full information case, each player's strategy is limited to randomized or pure stopping times adapted only to their own observed filtration. Randomized stopping times often take the form
$$\tau = \inf\{t \ge 0 : A_t > U\},$$
where $(A_t)$ is a right-continuous, nondecreasing, subfiltration-adapted process with values in $[0, 1]$, and $U$ is an independent uniform random variable on $[0, 1]$ providing the randomization (Angelis et al., 2020).
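As a minimal numerical sketch of this construction — assuming, purely for illustration, the deterministic generating process $A_t = 1 - e^{-t}$, which makes $\tau$ an exponential time with rate 1 — a randomized stopping time can be sampled as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_randomized_stop(A, t_grid, rng):
    """Sample tau = inf{t : A_t > U}, with U ~ Uniform(0, 1) independent of A.

    A is a nondecreasing path with values in [0, 1] evaluated on t_grid;
    if A never exceeds U we truncate at the horizon t_grid[-1]."""
    U = rng.uniform()
    hits = np.nonzero(A > U)[0]
    return t_grid[hits[0]] if hits.size else t_grid[-1]

# Illustrative generating process: A_t = 1 - exp(-t), so tau ~ Exp(1).
t_grid = np.arange(0.0, 10.0, 0.01)
A = 1.0 - np.exp(-t_grid)
taus = [sample_randomized_stop(A, t_grid, rng) for _ in range(10_000)]
print(np.mean(taus))  # close to 1, the mean of an Exp(1) random variable
```

In game settings the generating process $A$ would itself be adapted to the player's observation filtration; the uniform draw $U$ is the extra, private randomization device.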
Payoff to, say, Player 1, over stopping strategies $(\tau, \sigma)$ is
$$J(\tau, \sigma) = \mathbb{E}\big[f_\tau \mathbf{1}_{\{\tau < \sigma\}} + g_\sigma \mathbf{1}_{\{\sigma < \tau\}} + h_\tau \mathbf{1}_{\{\tau = \sigma\}}\big].$$
Partial information may arise from hidden Markov regimes, unobserved drift coefficients, random clocks, or from not knowing the opponent's very presence ("ghost competition") (Angelis et al., 2019).
2. Representation and Reduction to Markovian or Filtering Problems
The interplay of optimal stopping and partial information typically necessitates an augmented state including both observable quantities and a sufficient statistic for the agent’s posterior beliefs. E.g., when an asset price evolves under an unobserved drift $\mu \in \{\mu_0, \mu_1\}$ as
$$dX_t = \mu X_t\,dt + \sigma X_t\,dW_t,$$
the agents observe $X$ and filter the hidden state via the conditional law $\Pi_t = \mathbb{P}(\mu = \mu_1 \mid \mathcal{F}^X_t)$, leading to a coupled system
$$dX_t = \big(\mu_0 + (\mu_1 - \mu_0)\Pi_t\big) X_t\,dt + \sigma X_t\,d\widehat{W}_t, \qquad d\Pi_t = \frac{\mu_1 - \mu_0}{\sigma}\,\Pi_t(1 - \Pi_t)\,d\widehat{W}_t,$$
where $\widehat{W}$ is the innovation Brownian motion and $(X_t, \Pi_t)$ are jointly Markov (Angelis et al., 2017).
Optimal strategies and value functions are then characterized on the extended state $(X_t, \Pi_t)$, with stopping boundaries expressed as functions of the current belief, as opposed to just the underlying process.
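A short Euler–Maruyama sketch of the coupled price–belief dynamics described above, with a two-point drift prior $\mu \in \{\mu_0, \mu_1\}$ and volatility $\sigma$ (all parameter values below are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (arbitrary): two drift regimes and a volatility.
mu0, mu1, sigma = -0.05, 0.10, 0.25
dt, n_steps = 1e-3, 5_000

def simulate_price_belief(x0=1.0, pi0=0.5):
    """Euler-Maruyama for the jointly Markov price-belief pair (X, Pi),
    both driven by the same innovation Brownian increment dw."""
    x, pi = x0, pi0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        x += (mu0 + (mu1 - mu0) * pi) * x * dt + sigma * x * dw  # filtered drift
        pi += (mu1 - mu0) / sigma * pi * (1.0 - pi) * dw          # belief SDE
        pi = min(max(pi, 0.0), 1.0)  # numerical guard: keep belief in [0, 1]
    return x, pi

x_T, pi_T = simulate_price_belief()
print(x_T, pi_T)
```

The key structural point the sketch reproduces is that both coordinates are driven by the same innovation noise, so price surprises and belief revisions move together.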
Analogous filtering-based reductions arise in models of asymmetric information (only one player knows a latent type or drift), in which the uninformed party uses the observed process and inaction history to update beliefs, often leading to a controlled diffusion for the belief process (Angelis et al., 2018).
3. Existence, Value, and Equilibrium Structure
Under broad assumptions—bounded and integrable payoffs, right-continuous and complete filtrations, and the possibility of randomized strategies—zero-sum Dynkin games with partial information admit a value in mixed strategies:
$$V = \sup_{\tau}\inf_{\sigma} J(\tau, \sigma) = \inf_{\sigma}\sup_{\tau} J(\tau, \sigma),$$
with optimal randomized stopping times constructed from nondecreasing, adapted generating processes (Angelis et al., 2020, Angelis et al., 17 Oct 2025).
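To make the value concept concrete, a discrete-time, full-information sketch: on a symmetric random walk with payoffs satisfying $f \le g$, the game value obeys the backward recursion $V_n = \min(g, \max(f, \mathbb{E}[V_{n+1}]))$. The payoffs below are illustrative only, not taken from the cited papers:

```python
import numpy as np

# Discrete-time zero-sum Dynkin game on a symmetric random walk S_n.
# Player 1 (maximizer) receives f(S) if she stops first and g(S) >= f(S)
# if Player 2 stops first; illustrative payoffs only.
T = 50
S = np.arange(-60, 61)                 # state grid, wide enough for T steps
f = np.maximum(S.astype(float), 0.0)   # payoff when Player 1 stops
g = f + 1.0                            # payoff when Player 2 stops (f <= g)

V = f.copy()                           # terminal condition at time T
for _ in range(T):
    cont = V.copy()
    cont[1:-1] = 0.5 * (V[2:] + V[:-2])      # E[V_{n+1} | S_n = s]
    V = np.minimum(g, np.maximum(f, cont))   # stop/continue recursion
print(V[S == 0][0])  # value from S_0 = 0, which lies in [f(0), g(0)] = [0, 1]
```

Under partial information this recursion would run on the augmented state (walk plus belief), and the clipping between $f$ and $g$ is precisely where the ordering $f \le g$ enters.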
In frameworks where state reduction and regularity permit, value functions are characterized as unique bounded continuous viscosity solutions to fully nonlinear variational inequalities, schematically of the form
$$\max\Big\{\min\big[\partial_t V + \mathcal{L}V,\; V - f\big],\; V - g,\; -\lambda_{\min}\big(p, D^2_{pp} V\big)\Big\} = 0,$$
where $p$ encodes posterior probabilities over hidden regimes and $\lambda_{\min}(p, D^2_{pp}V)$ is the minimal directional second derivative of $V$ over the tangent cone to the probability simplex at $p$ (Grün, 2012).
Nash equilibria have concrete descriptions in structured partial information games: for instance, in preemptive games with ghost competitors, equilibrium strategies are characterized by regions in belief–state space, with explicit reflection or jump-to-boundary structures, and randomization is essential; best response dynamics can be reduced to one-player optimal stopping against a parameterized hazard function (Angelis et al., 2019).
4. Martingale and Functional Analytic Approaches
A unifying methodology rests on martingale and Doob-Meyer decomposition techniques. Key auxiliary submartingale/supermartingale systems associated with conditional value processes guide the verification of optimality and the identification of equilibrium strategies under arbitrary (even non-Markovian) information flows (Angelis et al., 17 Oct 2025).
Sion’s min–max theorem provides existence by recasting the problem into bilinear functions over convex, weakly compact sets of generating processes (control-type objects), and verifying the required continuity and convexity properties for the payoff functional (Angelis et al., 2020).
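For reference, the version of Sion's min–max theorem invoked here reads, in standard textbook form: if $X$ is a convex, compact subset of a linear topological space, $Y$ a convex subset of another, and $F : X \times Y \to \mathbb{R}$ is quasi-concave and upper semicontinuous in $x$ for each $y$, and quasi-convex and lower semicontinuous in $y$ for each $x$, then
$$\sup_{x \in X}\,\inf_{y \in Y} F(x, y) = \inf_{y \in Y}\,\sup_{x \in X} F(x, y).$$
In the Dynkin game application, $X$ and $Y$ are the (convex, weakly compact) sets of generating processes for the two players' randomized stopping times, and $F$ is the bilinear expected payoff functional.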
The construction of (sub/super-)martingale systems, their Doob-Meyer decompositions, and the optional projections of unobservable payoffs onto each player's filtration are critical in non-classical games, ensuring the existence of the value and revealing that actions are only taken when the conditional value process coincides with the instantaneous payoff (Angelis et al., 17 Oct 2025).
5. Special Cases and Illustrative Models
The literature covers diverse instantiations.
- Preemption with latent competition: Two agents act under uncertainty about each other’s presence. Posterior beliefs on opponent activity, given no observed stopping, evolve deterministically conditioned on the hazard process. Equilibria partition state-space into “no-action”, “action”, and “stopping” zones, with partial information driving reflection-type mixed strategies and explicit algebraic relations (Angelis et al., 2019).
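A minimal sketch of this deterministic belief decay, assuming (for illustration) that an active competitor stops at a constant hazard rate $\lambda$ — the hazard process in the cited paper need not be constant:

```python
import numpy as np

# Belief that a "ghost" competitor is present, given that no stopping has
# been observed by time t. Assumed parameterization: if present, the
# competitor stops at constant hazard rate lam (illustrative only).
def ghost_belief(t, p0=0.5, lam=1.0):
    alive = p0 * np.exp(-lam * t)        # P(present and has not stopped yet)
    return alive / (alive + (1.0 - p0))  # Bayes: condition on "no stop seen"

t = np.linspace(0.0, 5.0, 6)
print(ghost_belief(t))  # belief decays monotonically toward 0
```

Observing no stopping is itself informative: the longer nothing happens, the more weight the agent shifts toward the opponent being absent, which is exactly what shapes the equilibrium action zones.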
- Drift learning in option games: Both agents observe a price, but must infer unobserved drift. Filtering creates a two-dimensional Markov structure and moving optimal boundaries in the state-belief plane. Stopping sets, regularity of value function, and PDE structure are all described in terms of joint asset and belief dynamics (Angelis et al., 2017).
- Asymmetric information on system type: Only one agent knows the “regime”; the opponent must filter beliefs over time and may need to use randomized stopping strategies to hedge against information leakage. Equilibrium characterization reduces to coupled quasi-variational inequalities and smooth-fit conditions, with value functions explicit in certain linear models (Angelis et al., 2018).
- Independent observations with incomplete information: If agents act on independent Brownian motions and observe only their own paths, the lack of information about the opponent can yield nontrivial equilibria where even infinite expected payoffs may arise unless reward growth conditions are carefully controlled. For linearly growing payoffs, immediate stopping by at least one player is necessary for finiteness (Gaitsgori et al., 2023).
6. Open Problems and Practical Implications
The theory readily extends to non-Markovian settings, arbitrary filtration structures, vector-valued regimes, random time horizons, and piecewise regular or singular payoffs. Care is required: the absence of the “second-mover advantage” (the ordering $f_t \le g_t$) or of monotonic jump conditions can destroy the game value or preclude the existence of pure strategy equilibria (Angelis et al., 2020). Randomized stopping is essential for achieving the value in general.
Applications of Dynkin games with partial information appear in mathematical finance (real options with hidden competition, option pricing under unknown drift), economics (market entry/preemption with incomplete knowledge), and engineering (sequential detection with unobserved regimes). The analytical apparatus developed—including belief process diffusion, Hamilton–Jacobi–Bellman equations in augmented state, and functional-analytic existence proofs—has influenced related areas such as stochastic control under model uncertainty and differential games with incomplete information.
7. Summary Table: Key Features Across Partial Information Dynkin Games
| Feature | Representative Paper(s) | Key Phenomenon |
|---|---|---|
| Filtering/augmented state | (Angelis et al., 2017, Angelis et al., 2018) | Belief-driven Markovian reduction |
| Pure vs. randomized stops | (Angelis et al., 2019, Angelis et al., 17 Oct 2025) | Randomization needed for equilibrium |
| Non-Markovian payoffs | (Angelis et al., 2020, Angelis et al., 17 Oct 2025) | Martingale/minimax construction |
| Viscosity PDEs | (Grün, 2012, Angelis et al., 2018) | Variational inequalities on beliefs |
| Infinite payoff scenarios | (Gaitsgori et al., 2023) | Boundary hitting, discount/growth |
This synthesis reflects the structural and methodological diversity of Dynkin games with partial information, highlighting analytical techniques, equilibrium features, and the necessity of sophisticated belief and filtration management in optimal stopping under uncertainty.