Quality-Diversity Self-Play (QDSP) Overview

Updated 4 February 2026
  • QDSP is a self-play paradigm that drives agents to discover a broad range of high-quality and diverse strategies.
  • It employs embedding-based novelty detection and lexicographic archive updates to prevent mode collapse in strategy evolution.
  • Risk-sensitive QDSP integrates heterogeneous risk preferences to balance exploration and robust exploitation in multi-agent environments.

Quality-Diversity Self-Play (QDSP) is a self-play paradigm in which agents are systematically driven to discover a broad set of diverse, high-quality strategies rather than converging solely to high-performing but homogeneous solutions. QDSP integrates objectives to maximize both quality and diversity, employing either embedding-based novelty detection or a spectrum of agent risk preferences. A central feature is the maintenance of an archive populated by distinct, high-performing agents, updated using lexicographic or risk-tailored rules. Core instantiations either utilize foundation models as search operators capable of code generation and innovation in policy space (Dharna et al., 9 Jul 2025), or introduce heterogeneity through population-level risk-sensitive optimization (Jiang et al., 2023).

1. Core Principles and Definitions

QDSP is designed to overcome the mode collapse endemic to conventional self-play (SP) and population-based self-play (PP), which tend to produce a narrow band of strategies that become entrenched in local optima. The central goal is to produce an archive of agent policies that are both high-quality—typically operationalized as high win-rate or robust expected return—and diverse, meaning they are novel or span a wide region of strategy space.

Two mechanistic traditions have crystallized within QDSP:

  • MAP-Elites–Style Quality-Diversity: Lexicographic or multi-objective schemes store agents in an archive, judged by both their quality (Q) and their strategic novelty, as measured in an embedding space constructed from their policies (Dharna et al., 9 Jul 2025).
  • Risk-Sensitive Self-Play: Introducing diversity by assigning agents heterogeneous risk parameters, allowing the optimization objective to interpolate between worst-case and best-case behavior, thus enforcing diversity without needing hand-crafted behavior metrics (Jiang et al., 2023).

2. Lexicographic MAP-Elites QDSP with Foundation Models

In the MAP-Elites–style variant, candidate policies (denoted $\pi$) are characterized by a scalar performance metric $Q(\pi)$ (e.g., empirical win-rate) and an embedding $\phi(\pi) \in \mathbb{R}^n$ derived from the policy’s code via a fixed text-embedding model. The diversity of a policy is quantified as its Euclidean distance to its nearest neighbor in embedding space:

$D(\pi) = \min_{\pi' \in \text{Archive}} \|\phi(\pi) - \phi(\pi')\|_2.$

Archive updates follow a strict order:

  • New policies with sufficient novelty ($D(\pi) > \tau$, where $\tau$ is a threshold) are always added to the archive.
  • Otherwise, the nearest neighbor $\pi^*$ is located and replaced only if the new candidate exhibits higher quality ($Q(\pi) > Q(\pi^*)$); if not, the candidate is discarded.

This mechanism maintains one high-performing representative per local region ("niche") of policy space, where niches are implicitly defined by the embedding metric rather than predefined descriptors.
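A minimal sketch of this lexicographic update rule in Python; all names are illustrative, and archive entries are assumed to be `(policy, embedding, quality)` tuples:

```python
import numpy as np

def nearest_neighbor(archive, emb):
    """Return (index, distance) of the archive policy closest to emb."""
    dists = [np.linalg.norm(emb - e) for _, e, _ in archive]
    i = int(np.argmin(dists))
    return i, dists[i]

def archive_update(archive, policy, emb, quality, tau):
    """Lexicographic rule: admit if novel enough, else challenge the nearest elite."""
    if not archive:
        archive.append((policy, emb, quality))
        return True
    i, d = nearest_neighbor(archive, emb)
    if d > tau:                      # sufficiently novel: always admitted
        archive.append((policy, emb, quality))
        return True
    if quality > archive[i][2]:      # same niche: replace only if strictly better
        archive[i] = (policy, emb, quality)
        return True
    return False                     # dominated candidate is discarded
```

Novelty is checked before quality, which is what makes the rule lexicographic: a novel-but-weak policy still enters the archive, seeding a new niche.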

The Quality-Diversity Score (QD-score) summarizes archive performance:

$\mathrm{QD\text{-}Score} = \frac{1}{|\text{Cells}|}\sum_{\text{cell } c} Q(\pi_c)$

with cells defined by discretization (e.g., 2D PCA bins) of the embedding space.
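A sketch of the QD-score computation under these definitions, using SVD for the 2D PCA projection; averaging over occupied cells (rather than all grid cells) is an assumption the source leaves open:

```python
import numpy as np

def qd_score(embeddings, qualities, bins=10):
    """Project embeddings to 2D via PCA, discretize into a bins x bins grid,
    and average the best (elite) quality found in each occupied cell."""
    E = np.asarray(embeddings, dtype=float)
    X = E - E.mean(axis=0)
    # PCA via SVD: the first two right-singular vectors span the projection plane
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = X @ Vt[:2].T                                  # (N, 2) projected coords
    lo, hi = P.min(axis=0), P.max(axis=0)
    idx = np.floor((P - lo) / (hi - lo + 1e-9) * bins).astype(int)
    cells = {}
    for key, q in zip(map(tuple, idx), qualities):
        cells[key] = max(q, cells.get(key, -np.inf))  # keep the elite per cell
    return sum(cells.values()) / len(cells)
```

Bin coverage (the fraction of occupied cells) falls out of the same computation as `len(cells) / bins**2`.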

3. Algorithmic Workflow and Foundation Model Integration

A standard QDSP loop for two-player games (generalizable to $n$-player):

  1. For each role $r$, maintain an archive $\text{Archive}_r$ seeded with starter policies.
  2. In each iteration:
    • Sample a policy $\pi_r$ and an opponent $\pi_{-r}$ from the respective archives.
    • Evaluate their head-to-head performance.
    • Gather code and performance context, including the $k$ nearest neighbors.
    • Invoke a foundation model (FM) to generate a new candidate policy, conditioned on whether diversification or improvement is desired.
    • Embed the new policy and update the archive using the lexicographic rules outlined above.
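One iteration of this loop can be sketched as follows; `evaluate`, `embed`, and `fm_generate` are illustrative stand-ins for the environment, the text-embedding model, and the foundation-model call, and the random choice of prompt intent is a simplification:

```python
import random

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def qdsp_step(archives, evaluate, embed, fm_generate, role, k=3, tau=0.1):
    """One QDSP iteration for one role of a two-role game.
    archives[role] is a list of (code, embedding, quality) elites."""
    code, emb, _ = random.choice(archives[role])
    opp_code, _, _ = random.choice(archives[1 - role])
    quality = evaluate(code, opp_code)                   # e.g. win-rate vs. opponent
    neighbors = sorted(archives[role], key=lambda e: distance(e[1], emb))[:k]
    mode = random.choice(["diversify", "improve"])       # prompt intent (simplified)
    child = fm_generate(code, neighbors, quality, mode)  # FM as search operator
    c_emb, c_q = embed(child), evaluate(child, opp_code)
    # lexicographic archive update
    i = min(range(len(archives[role])),
            key=lambda j: distance(archives[role][j][1], c_emb))
    if distance(archives[role][i][1], c_emb) > tau:
        archives[role].append((child, c_emb, c_q))       # novel: always add
    elif c_q > archives[role][i][2]:
        archives[role][i] = (child, c_emb, c_q)          # better in-niche: replace
```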

Foundation models such as GPT-4o-mini or Claude 3.5 Sonnet act as intelligent search operators and are prompted with full policy code, recent outcomes, and niche exemplars. Generated code undergoes testing before evaluation in-environment (Dharna et al., 9 Jul 2025).

4. Risk-Sensitive Population-Based QDSP

An alternative instantiation frames diversity as a property of agent risk preferences. Here, each agent is associated with a risk-level parameter $\tau \in (0,1)$ that controls the interpolation between minimizing worst-case and maximizing best-case performance. The expectile Bellman operator is central:

$T^\pi_\tau V(s) := V(s) + 2\alpha\, \mathbb{E}_{a \sim \pi(\cdot \mid s),\, s' \sim P}\left[\tau[\delta]_+ + (1-\tau)[\delta]_-\right]$

where $\delta(s,a,s')$ is the TD error, $[x]_+ = \max(x,0)$, $[x]_- = \min(x,0)$, and $\alpha = 1/(2\max\{\tau, 1-\tau\})$.
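A minimal tabular sketch of this backup, approximating the expectation with sampled transitions; the discount factor `gamma` inside the TD error is an assumption not spelled out in the operator above:

```python
def expectile_backup(V, s, transitions, tau, gamma=0.99):
    """One expectile Bellman backup at state s. transitions is a list of
    (reward, next_state) samples drawn from pi and P, standing in for the
    expectation in the operator."""
    alpha = 1.0 / (2.0 * max(tau, 1.0 - tau))
    asym = 0.0
    for r, s2 in transitions:
        delta = r + gamma * V[s2] - V[s]                    # TD error
        asym += tau * max(delta, 0.0) + (1 - tau) * min(delta, 0.0)
    return V[s] + 2.0 * alpha * asym / len(transitions)
```

At $\tau = 0.5$ the asymmetric term reduces to $\tfrac{1}{2}\delta$ and $\alpha = 1$, so the operator recovers the standard expected Bellman backup; $\tau < 0.5$ down-weights positive TD errors (risk-averse), $\tau > 0.5$ down-weights negative ones (risk-seeking).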

Agents optimize risk-sensitive variants of the PPO surrogate objective using expectile-based advantage estimates, with population heterogeneity maintained through randomization and population-based training (PBT):

  • Periodically, underperforming agents adopt parameters and risk levels of stronger agents with added perturbation, promoting exploration of risk-preference space.
  • Empirical evidence indicates that population-level diversity in $\tau$ is crucial for robustness and breadth of discovered strategies, particularly in environments exhibiting non-transitive cycles (Jiang et al., 2023).
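The PBT exploit-and-explore step described above might look like the following sketch; the quartile fraction and perturbation scale are illustrative choices, not values from the source:

```python
import random

def pbt_exploit_explore(population, perturb=0.1, frac=0.25):
    """Bottom `frac` of agents copy the parameters and risk level of a
    randomly chosen top agent, then perturb tau to keep exploring
    risk-preference space. Each agent: {'params', 'tau', 'score'}."""
    ranked = sorted(population, key=lambda a: a["score"], reverse=True)
    n = max(1, int(len(ranked) * frac))
    top, bottom = ranked[:n], ranked[-n:]
    for agent in bottom:
        src = random.choice(top)
        agent["params"] = dict(src["params"])    # exploit: copy stronger weights
        # explore: perturb the inherited risk level, clipped to (0, 1)
        agent["tau"] = min(0.99, max(0.01,
                           src["tau"] + random.uniform(-perturb, perturb)))
    return population
```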

5. Archive Structure, Novelty Metrics, and Hyperparameters

In MAP-Elites–style QDSP, each $\text{Archive}_r$ is a set of code-based policies, embedded via a frozen text model (dimension $n = 64$, e.g., OpenAI text-embedding-3-small). Euclidean nearest-neighbor distances define novelty; the novelty threshold $\tau$ is dynamically determined, typically as the 10th percentile of observed distances.
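One plausible reading of the dynamic threshold, computed over nearest-neighbor distances within the current archive:

```python
import numpy as np

def novelty_threshold(archive_embeddings, pct=10):
    """Dynamic novelty threshold: the pct-th percentile of the
    nearest-neighbor distances observed inside the archive."""
    E = np.asarray(archive_embeddings, dtype=float)
    D = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                  # ignore self-distances
    return float(np.percentile(D.min(axis=1), pct))
```

A low percentile makes the threshold permissive early on, so most candidates count as novel while the archive is sparse; as niches fill in, the threshold tightens automatically.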

Key architectural and evaluation parameters include:

  • Archive sizes: ~250–300 policies per role.
  • Nearest neighbors ($k = 3$) for context selection.
  • LLM code generation loop employs context windows and explicit prompts for both diversification and performance improvement.
  • Evaluation in benchmark domains involves round-robin ELO scoring, bin-coverage QD-score computation, and scenario-driven win rates (Dharna et al., 9 Jul 2025).

6. Empirical Performance and Case Studies

Foundation-Model QDSP Benchmarks

  • Car Tag (continuous control): QDSP discovered both RL- and heuristic-based policies, yielding highest QD-scores and covering 60–70% of embedding bins. It produced champion agents matching or exceeding strong human-designed baselines.
  • Gandalf (AI safety): QDSP enabled attackers to break all but the final level of progressively sophisticated LLM defenses. Re-running evolution on defenders enabled rapid patching of discovered vulnerabilities. QDSP exhibited the highest QD-score, with distinctive, creative attack strategies emerging (“ReverseMappingAttacker,” “TeachingStyleAttacker,” etc.).

Comparative results show that QDSP’s combination of exploration and refinement outperformed both purely exploitative (vFMSP) and purely explorative (NSSP) approaches, as well as open-loop FM generators. The method effectively automates the discovery and adaptation cycle without handcrafted niches (Dharna et al., 9 Jul 2025).

Risk-Preference QDSP Experiments

  • Slimevolley: Risk-averse agents play cautiously, risk-seeking agents attempt aggressive spikes. Population-level QDSP via RPPO and PBT produced five distinct behavioral clusters as measured by t-SNE, outperforming both standard SP and leading diversity-bolstering baselines.
  • SumoAnts: Population-level heterogeneity fostered both defensive (risk-averse) and offensive (risk-seeking) strategies, with RPBT consistently achieving superior win-rates and robustness (Jiang et al., 2023).

7. Significance and Theoretical Considerations

QDSP represents a unifying framework for open-ended self-play that overcomes traditional exploration-exploitation dilemmas. The lexicographic, “dimensionless” MAP-Elites variant obviates the need for explicit behavioral descriptors, broadening applicability to diverse policy representations. Risk-sensitive QDSP provides a principled, sampling-free axis for diversity by leveraging expectile operators to span the full performance-risk spectrum.

The structural insight is that QDSP, through explicit archive management and policy generation mechanisms, systematically enlarges the policy manifold explored by agent populations—either via embedding neighborhoods or risk-level stratification—while retaining focus on high-performance behaviors. This suggests new avenues in both automatic curriculum generation and robustness optimization in multi-agent learning environments.
