
BEDA: Belief Estimation as Probabilistic Constraints for Performing Strategic Dialogue Acts

Published 31 Dec 2025 in cs.CL, cs.GT, and cs.MA | (2512.24885v1)

Abstract: Strategic dialogue requires agents to execute distinct dialogue acts, for which belief estimation is essential. While prior work often estimates beliefs accurately, it lacks a principled mechanism to use those beliefs during generation. We bridge this gap by first formalizing two core acts, Adversarial and Alignment, and by operationalizing them via probabilistic constraints on what an agent may generate. We instantiate this idea in BEDA, a framework consisting of a world set, a belief estimator, and a conditional generator that selects acts and realizes utterances consistent with the inferred beliefs. Across three settings, Conditional Keeper Burglar (CKBG, adversarial), Mutual Friends (MF, cooperative), and CaSiNo (negotiation), BEDA consistently outperforms strong baselines: on CKBG it improves success rate by at least 5.0 points across backbones and by 20.6 points with GPT-4.1-nano; on Mutual Friends it achieves an average improvement of 9.3 points; and on CaSiNo it achieves the optimal deal relative to all baselines. These results indicate that casting belief estimation as constraints provides a simple, general mechanism for reliable strategic dialogue.

Summary

  • The paper introduces BEDA, a framework that models belief estimation as probabilistic constraints to generate strategic dialogue acts.
  • It formalizes adversarial and alignment dialogue acts using game-theoretic constraints and demonstrates improvements across various simulated tasks.
  • Empirical results highlight significant performance gains, including up to 20.6-point improvements in adversarial scenarios and enhanced efficiency in negotiations.

Belief Estimation as Probabilistic Constraints for Strategic Dialogue: A Comprehensive Analysis of BEDA

Introduction and Motivation

Strategic dialogue in multi-agent environments often requires advanced forms of theory-of-mind reasoning, including the estimation of others’ beliefs and the capacity to generate utterances that strategically leverage those beliefs. While prior work demonstrates the importance of belief estimation, the mechanism by which inferred beliefs are purposefully operationalized in utterance generation remains underexplored. The BEDA (Belief Estimation as probabilistic constraints for Dialogue Acts) framework addresses this gap by explicitly casting belief states as probabilistic constraints—structurally linking Theory-of-Mind modeling with controllable, goal-directed dialogue act execution. The framework supports both adversarial (misleading/distracting opponents) and alignment (fostering common ground) dialogue behaviors, each mathematically formalized as constrained generation tasks.

(Figure 1)

Figure 1: An overview of the BEDA framework using the Keeper-Burglar Game. The world set encodes shared structure; belief estimation yields probabilistic models over events; the conditional generator strategically selects and realizes utterances consistent with the selected dialogue act.

Formalization of Dialogue Acts as Probabilistic Constraints

A distinguishing theoretical contribution of BEDA is the rigorous game-theoretic formulation of two foundational dialogue acts:

  • Adversarial Dialogue Act: An utterance is adversarial if it asserts events that the speaker is confident in but that fall outside the support of the addressee’s current beliefs. Formally, this is encoded by two probabilistic constraints: high speaker confidence, P_A(E) ≥ 1 − ε, and high probability that the addressee does not know E, P_A(¬K_B E) ≥ 1 − ε.
  • Alignment Dialogue Act: An utterance is an alignment act if it asserts events that are highly likely to be true and mutually known, P_A(K_B E) ≥ 1 − ε. This promotes convergence, mutual trust, and efficient coordination.

These act types reduce dialogue generation to a constrained optimization problem, decoupling language fluency from explicit belief-structural control. The world set provides state/event structure; belief estimation (supervised neural classifiers) computes agent- and opponent-belief probabilities; the conditional generator (LLM or smaller model) samples utterances conditional on feasible events.
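As a concrete illustration, the two constraint sets above can be read as a feasibility filter over the world set. The sketch below assumes a belief estimator exposing P_A(E) and P_A(K_B E) per event; the names `BeliefEstimate` and `feasible_events` are illustrative, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class BeliefEstimate:
    p_true: float        # P_A(E): speaker A's confidence that event E holds
    p_known_by_b: float  # P_A(K_B E): A's estimate that addressee B knows E

def feasible_events(beliefs, act, eps=0.1):
    """Return the events satisfying the probabilistic constraints of the act."""
    out = []
    for event, b in beliefs.items():
        if act == "adversarial":
            # high confidence E is true, high probability B does NOT know E
            if b.p_true >= 1 - eps and (1 - b.p_known_by_b) >= 1 - eps:
                out.append(event)
        elif act == "alignment":
            # high probability E is true and mutually known
            if b.p_true >= 1 - eps and b.p_known_by_b >= 1 - eps:
                out.append(event)
    return out
```

The generator then samples an utterance asserting only events in this feasibility region, which is what decouples fluency from belief-structural control.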

Methodology and Experimental Design

BEDA’s pipeline involves three modular stages:

  1. World Set Construction: Define a set of events governing the environment's latent structure.
  2. Belief Estimation: Train context-sensitive discriminators (e.g., BERT encoders) to output probabilistic assignments for self and opponent beliefs over the world set.
  3. Conditional Generation: The LLM receives context, selected events, and the agent’s dialogue act intent, generating utterances constrained by the act-specific feasibility region.
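The three stages above can be wired together roughly as follows. This is a sketch under stated assumptions: the estimator and generator interfaces (`estimate_beliefs`, `generate_utterance`) are stand-ins, not the paper's actual components.

```python
def beda_turn(world_set, dialogue_context, act,
              estimate_beliefs, generate_utterance, eps=0.1):
    """One BEDA turn: estimate beliefs, filter events, realize an utterance."""
    # Stage 2: probabilistic belief assignments over the world set
    beliefs = estimate_beliefs(dialogue_context, world_set)  # {event: (p_true, p_known_by_b)}

    # Stage 3 (selection): keep only events inside the act's feasibility region
    if act == "adversarial":
        events = [e for e, (pt, pk) in beliefs.items() if pt >= 1 - eps and pk <= eps]
    else:  # alignment
        events = [e for e, (pt, pk) in beliefs.items() if pt >= 1 - eps and pk >= 1 - eps]

    # Stage 3 (realization): condition the generator on the selected events
    return generate_utterance(dialogue_context, act, events)
```

Keeping the stages modular is what makes the framework backbone-agnostic: any LLM that accepts the selected events as conditioning context can serve as the generator.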

BEDA’s effectiveness is evaluated in three task archetypes:

  • Conditional Keeper–Burglar Game (CKBG): Measures adversarial capacity—misdirecting opponent agents via strategic deception.
  • Mutual Friends (MF): A cooperative paradigm where the goal is efficient partner alignment on mutually held knowledge.
  • CaSiNo: A mixed negotiation with resource trading and preference inference, requiring both alignment and adversarial acts.

Across all scenarios, BEDA is instantiated with different backbone LLMs (GPT-3.5, GPT-4, LLaMA2, Qwen2.5), and metrics include success rate, turn efficiency, and reward in agreement (for negotiation).
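For clarity, the metrics named above can be computed as below; the episode field names (`success`, `turns`, `reward`) are illustrative, not the paper's logging schema.

```python
def evaluate(episodes):
    """episodes: list of dicts with 'success' (bool), 'turns' (int),
    and optional 'reward' (float, negotiation settings only)."""
    n = len(episodes)
    success_rate = sum(e["success"] for e in episodes) / n
    # turn efficiency: successes per dialogue turn across all episodes
    turn_efficiency = sum(e["success"] for e in episodes) / sum(e["turns"] for e in episodes)
    # agreement reward: averaged only over episodes that reached a deal
    rewards = [e["reward"] for e in episodes if e.get("reward") is not None]
    avg_reward = sum(rewards) / len(rewards) if rewards else None
    return success_rate, turn_efficiency, avg_reward
```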

Empirical Results

Adversarial Interaction: Keeper–Burglar Game

BEDA outperforms all baselines (including CoT prompting and self-reflection) on adversarial misdirection, achieving absolute success-rate gains of up to 20.6 points over baseline with GPT-4.1-nano and at least 5.0 points across the other backbones. It also far exceeds random-belief variants, showing that accurately estimated beliefs used as constraints are critical and that unreliable belief models undermine adversarial intent.

Alignment in Cooperative Games: Mutual Friends

In cooperative identification, BEDA robustly improves both success rate and communication efficiency (success per turn and per token). For instance, it achieves gains of up to +30.4 points over baseline with GPT-3.5. The framework not only boosts overall accuracy but also shortens dialogues, reflecting better prioritization of diagnostic information. Baselines that naively inject all belief information or rely on open-ended reasoning (CoT) yield suboptimal outcomes, underscoring the importance of structured, constraint-driven generation.

Mixed-Objective Negotiation: CaSiNo

In mixed alignment/competition negotiation, BEDA yields the highest mean utility in agreed negotiations and sustains strong agreement rates across both closed- and open-source backbones. Unlike MindDial, which only introduces beliefs as additional context, BEDA’s probabilistic constraint mechanism leads to superior deal quality, demonstrating effective control over dialogue dynamics in complex multi-objective settings.

Figure 2

Figure 2: CaSiNo—BEDA achieves optimal or near-optimal average agreement reward, outperforming methods that inject belief context without constrained usage.

Analysis of Belief Estimator Accuracy and Failure Modes

Belief estimators trained on supervised context-event pairs generalize well in cleaner, more synthetic settings (MF and CKBG: ~90% classification accuracy) but degrade in negotiation (CaSiNo: 74.4%), likely due to greater belief complexity and ambiguity in the dialogue.

Case studies in adversarial and alignment regimes reveal that BEDA’s agents dynamically adapt utterances to exploit shifts in estimated belief, leveraging ambiguity or common ground as required by the scenario. By contrast, LLM agents without belief-act decoupling frequently hallucinate, fail to properly eliminate candidate hypotheses, or redundantly loop through knowledge states—leading to lower practical efficacy and efficiency.

Implications, Limitations, and Future Directions

Implications: The explicit casting of belief estimation as action-space constraints enables the separation of reasoning over hidden states from response realization, increasing transparency and precision in agent behavior. This provides not only a unifying computational account for both competitive and cooperative strategic language but also a flexible scaffold for theory-of-mind modeling within general LLM systems. The methodology is agnostic to the underlying LLM, facilitating plug-and-play augmentation of diverse architectures.

Limitations: BEDA currently operates with pre-defined static world sets; inference-time world set construction and incremental knowledge expansion is an open avenue. Moreover, only coarse dialogue act types are realized—future schemas might decompose acts into multiple compositional and hierarchical categories for higher-fidelity social interaction. While lightweight encoders suffice for discrete event belief estimation, more open-ended or unstructured scenarios may require robust LLM-based belief inference.

Future Work: Directions include dynamic world set maintenance, hierarchical act modeling, recursive higher-order belief reasoning, and extending constraint-based approaches to multi-party and multimodal scenarios. Unifying with RL-based policy optimization under explicit belief constraints is a promising avenue for scalable coordination and competition.

Conclusion

BEDA advances the computational modeling of strategic dialogue by linking theory-of-mind belief reasoning with formal, probabilistically constrained action execution. Empirical evaluations demonstrate that belief-constrained generation systematically outperforms methods relying solely on black-box LLM reasoning, context prompts, or unconstrained ToM modeling—delivering substantial gains in adversarial, cooperative, and negotiation environments. Future AI systems integrating explicit belief-act interfaces will be better positioned for reliable, interpretable, and strategically robust social interaction.


Reference: "BEDA: Belief Estimation as Probabilistic Constraints for Performing Strategic Dialogue Acts" (2512.24885)
