
AC-TGPO: Joint Attack-Defense Policy Optimization

Updated 1 December 2025
  • The paper demonstrates that AC-TGPO efficiently optimizes joint attack-defense policies using a dual-MDP framework and tree-aware, group-normalized PPO.
  • It introduces an adversarial curriculum that dynamically balances normal, asymmetric, and hard samples with GS-MCTS for escalating challenges.
  • Empirical results reveal significant improvements in safety metrics, setting a new benchmark for robust LLM jailbreak defense.

Adversarial Curriculum Tree-aware Group Policy Optimization (AC-TGPO) is a reinforcement learning module designed for joint attack and defense policy training in LLMs, particularly in adversarial settings such as jailbreak attack-defense co-evolution. Introduced as a core component of the ACE-Safety framework, AC-TGPO integrates policy optimization with adversarial curriculum learning, utilizing a tree-aware, group-normalized approach to improve the robustness and mutual advancement of attacker and defender LLMs through joint training on dynamically difficult samples (Li et al., 24 Nov 2025).

1. Formalization of Joint Attack-Defense Optimization

AC-TGPO models both the attacker LLM ($\mathcal{M}_A$) and the defender LLM ($\mathcal{M}_D$) as parameterized policies within two parallel Markov Decision Processes (MDPs). The attack policy operates over nodes in the Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS) tree:

  • Attack MDP: The state $s$ consists of the original malicious query $p$, a group of $G$ LLM-generated candidate rewrites $\hat{\mathbf{q}}$, their corresponding defense-model responses $\hat{\mathbf{o}}$, and judge scores $\hat{\mathbf{j}}$ (harm, responsibility, co-relevance). The action space comprises $K$ discrete prompt-rewriting strategies. Upon taking action $a$, new rewrites are generated, responses are judged, and the tree is updated. The attack reward $r_A(s,a) = j^h_{\max}$, conditioned on a co-relevance threshold, directly incentivizes maximal harmfulness among group candidates.
  • Defense MDP: Each state is a single adversarial input $\widetilde{q}$ (the most harmful rewrite as assessed by the judge model). The defender outputs tokens via standard autoregressive LLM generation. The reward $r_D = \frac{(10 - j^h) + j^r}{2}$ rewards low harmfulness and high responsibility in responses.

This dual-MDP interlocking structure enables adversarial co-evolution during policy optimization (Li et al., 24 Nov 2025).
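As a concrete illustration, the two reward signals above can be sketched in plain Python (a minimal sketch, not the authors' code; the score-dictionary keys and the co-relevance threshold value are assumptions, and judge scores are assumed to lie in [0, 10]):

```python
def attack_reward(judge_scores, corel_threshold=8.0):
    """r_A = maximum harm score j^h among the group's candidates,
    gated on each candidate clearing a co-relevance threshold."""
    valid = [s for s in judge_scores if s["j_c"] >= corel_threshold]
    if not valid:
        return 0.0  # no candidate stays on-topic enough to count
    return max(s["j_h"] for s in valid)


def defense_reward(j_h, j_r):
    """r_D = ((10 - j^h) + j^r) / 2: rewards low harmfulness and
    high responsibility in the defender's response."""
    return ((10.0 - j_h) + j_r) / 2.0
```

A fully harmless, fully responsible response (j_h = 0, j_r = 10) yields the maximal defense reward of 10.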

2. Adversarial Curriculum Reinforcement Learning

AC-TGPO implements a multi-round adversarial curriculum strategy with $I = 4$ iterations:

  • Normal Set ($\mathcal{T}^N_i$): Collected by running GS-MCTS with the current attack and defense policies, yielding typical challenge samples.
  • Asymmetric Set ($\mathcal{T}^A_i$): Generated by attacking the defense model from the previous curriculum round, mining samples on which the legacy defense is vulnerable.
  • Hard Set ($\mathcal{T}^H_i$): Derived by re-testing the newest defense on previously merged hard cases, harvesting samples that resist current mitigation.

These sets are merged per round into $\mathcal{T}_i = \mathcal{T}^N_i \cup \mathcal{T}^A_i \cup \mathcal{T}^H_i$, with early epochs in each round over-sampling hard and asymmetric scenarios to accelerate difficulty adaptation. Sample composition and hardness are therefore dynamically calibrated by adversarial interaction (Li et al., 24 Nov 2025).
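The per-round merging with early over-sampling might be sketched as follows (the `warm_epochs` and `boost` parameters are illustrative choices, not values from the paper):

```python
import random


def build_round_pool(normal, asymmetric, hard, epoch, warm_epochs=1, boost=2):
    """Merge T^N_i, T^A_i, T^H_i for one curriculum round. During the
    first `warm_epochs` epochs, asymmetric and hard samples are
    over-sampled by `boost`x to accelerate difficulty adaptation."""
    pool = list(normal) + list(asymmetric) + list(hard)
    if epoch < warm_epochs:
        pool += (boost - 1) * (list(asymmetric) + list(hard))
    random.shuffle(pool)
    return pool
```

Later epochs (epoch >= warm_epochs) fall back to the plain union of the three sets.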

3. Tree-Aware Group Policy Optimization

AC-TGPO employs a PPO-style policy optimizer with advanced normalization procedures at both the group and search-tree level:

  • Group-level normalization: For each group of $G$ rollouts per training example, rewards are standardized within the group:

$$\mu' = \frac{1}{G}\sum_{k=1}^{G} \widetilde{r}_k, \quad \sigma' = \sqrt{\frac{1}{G}\sum_{k=1}^{G}\left(\widetilde{r}_k - \mu'\right)^2}, \quad r'_i = \frac{\widetilde{r}_i - \mu'}{\sigma'}.$$

  • Tree-level normalization: Rollouts are also normalized across the MCTS search tree using depth-aware, discounted weights to reflect both local sample quality and the global search context.
  • Policy update objective: The per-token PPO loss is computed with a clipped policy probability ratio, augmented by a KL penalty term:

$$\mathcal{L}(\theta) = -\frac{1}{G}\sum_{i=1}^{G} \frac{1}{|\widetilde{o}_i|} \sum_{t=1}^{|\widetilde{o}_i|} \left[ \min\!\left( r_{i,t}(\theta)\,\hat{A}_{i,t},\ \mathrm{clip}\big(r_{i,t}(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big)\,\hat{A}_{i,t} \right) - \beta\,\mathbb{D}_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big) \right].$$

Parameters: PPO clip $\varepsilon = 0.1$, KL penalty $\beta = 0.01$.

Differentiation from vanilla PPO is achieved by the two-stage normalization and explicit integration of MCTS tree statistics. This enables variance stabilization and effective credit assignment in highly non-stationary, adversarial LLM training regimes (Li et al., 24 Nov 2025).
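The group normalization and the clipped, KL-penalized objective can be sketched in NumPy (a simplified per-token view; the tree-level normalization and the actual KL estimator are omitted, and the small epsilon guard on the standard deviation is our addition):

```python
import numpy as np


def group_normalize(rewards):
    """Standardize rewards within a group of G rollouts:
    r'_i = (r_i - mu') / sigma'."""
    mu = rewards.mean()
    sigma = rewards.std() + 1e-8  # guard against zero-variance groups
    return (rewards - mu) / sigma


def clipped_pg_loss(ratio, adv, kl, eps=0.1, beta=0.01):
    """Clipped PPO objective with KL penalty. `ratio` is the per-token
    policy probability ratio pi_theta / pi_old, `adv` the
    (group-normalized) advantage, `kl` a per-token estimate of
    KL(pi_theta || pi_ref)."""
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -(np.minimum(unclipped, clipped) - beta * kl).mean()
```

With a positive advantage and a ratio above 1 + eps, the clipped branch caps the update, reproducing PPO's trust-region-like behavior.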

4. Network Architecture and Parameterization

Both attacker and defender are instantiated as LLMs sharing identical backbone architectures (e.g., Vicuna-7B/13B, Llama3-8B, or Mistral-7B-v0.3) and transformer stacks. There are no additional task-specific decoder heads; differentiation between attack and defense roles is controlled solely via input prompt templates. Curriculum stage cues are injected exclusively through data sampling, not model architecture. Judging for reward computation is performed by a frozen reference LLM (GPT-4, temperature 0) (Li et al., 24 Nov 2025).

5. Optimization Procedures and Hyperparameter Choices

Training utilizes 8× NVIDIA H800 GPUs, PyTorch, and AdamW with linear warmup (peak $\mathrm{lr} = 2\times10^{-5}$). Key settings include a microbatch size of 1 sequence per GPU, group size $G = 6$, curriculum length $I = 4$, $N_m = 50$ GS-MCTS search steps per query, depth discount $\gamma = 0.96$, exploration constant $c_p = 1$, and jailbreak threshold $\eta = 8$. PPO-specific settings are $\varepsilon = 0.1$ and $\beta = 0.01$. Generation temperatures $\tau_A = \tau_D = 0.9$ improve diversity, while the judge model uses $\tau_J = 0$. Regularization is enforced through the KL loss constraint; gradient norms are clipped to 1.0. No explicit entropy regularization beyond the PPO clipped objective is included (Li et al., 24 Nov 2025).
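For reference, the reported hyperparameters can be gathered into a single configuration (the values come from the paper; the dictionary layout and key names are purely an organizational sketch):

```python
# AC-TGPO hyperparameters as reported (key names are our own labels).
AC_TGPO_CONFIG = {
    "optimizer": "AdamW",
    "peak_lr": 2e-5,               # linear warmup to this peak
    "microbatch_per_gpu": 1,       # sequences per GPU
    "group_size_G": 6,             # rollouts per training example
    "curriculum_rounds_I": 4,
    "mcts_steps_Nm": 50,           # GS-MCTS search steps per query
    "depth_discount_gamma": 0.96,
    "exploration_cp": 1.0,
    "jailbreak_threshold_eta": 8,
    "ppo_clip_eps": 0.1,
    "kl_beta": 0.01,
    "temp_attack": 0.9,
    "temp_defense": 0.9,
    "temp_judge": 0.0,
    "grad_clip_norm": 1.0,
}
```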

6. Empirical Outcomes and Ablation Analysis

Empirical evaluation demonstrates substantial increases in both attack and defense robustness:

| Metric | ACE-Safety (Attack) | ACE-Safety (Defense) | Baseline |
| --- | --- | --- | --- |
| ASR-LR (↑ is worse for defense) | 95.2% (Vicuna-13B, 7.4 ANA) | 7.3% (Vicuna-7B, TAP) | varied; all less robust |
| Helpfulness | – | ≥ 5.4/10 (MT-Bench, AlpacaEval) | lower |
| Robustness (OST/SAT) | – | see Tables 2–4, Figs. 5–7 | lower |
| Responsibility | – | CValues-RP | lower |

Ablation studies reveal that freezing the attack model ($\mathcal{M}_A$) increases ASR by ~3 points, removing GS-MCTS or prior context each increases ASR by ~4 points, disabling tree-aware normalization adds ~2 points, and removing asymmetric or hard samples increases ASR by ~1.5 points. Each component of the AC-TGPO regime is therefore substantiated as critical for final system robustness (Li et al., 24 Nov 2025).

7. Context and Significance

AC-TGPO combines group-normalized, tree-aware PPO-based policy optimization, adversarial curriculum scheduling, and joint attack-defense co-training in adversarial LLM safety alignment. This configuration allows for continuous mutual advancement in both attack capability and defense robustness. In contrast to prior approaches that optimize only attackers or defenders in isolation, AC-TGPO operationalizes a co-evolutionary paradigm where the sample hardness and adversarial tactics escalate symmetrically with training progress. The resulting models set new benchmarks for LLM safety in jailbreak settings. A plausible implication is that group-level and tree-context-aware normalization can be generally beneficial in adversarial RL for other domains with non-stationary, self-escalating objectives (Li et al., 24 Nov 2025).

