VQ-BeT: Vector-Quantized Behavior Transformer

Updated 12 February 2026
  • VQ-BeT is a generative modeling framework that utilizes a hierarchical vector quantization bottleneck to represent complex, multimodal continuous behavior sequences.
  • It overcomes limitations of traditional k-means discretization and iterative denoising by enabling end-to-end differentiable training and single-pass inference.
  • Empirical results across manipulation, locomotion, and autonomous driving show notable performance and efficiency gains over baseline models.

Vector-Quantized Behavior Transformer (VQ-BeT) is a generative modeling framework designed for complex, multi-modal, and continuous behavior sequences in decision-making tasks. It addresses the limitations of prior approaches, such as Behavior Transformers (BeT) relying on k-means discretization and diffusion policies using iterative denoising, by introducing a hierarchical vector quantization bottleneck. This enables compact and expressive latent representations of actions that can be efficiently modeled and predicted by transformer architectures in both conditional and unconditional settings (Lee et al., 2024).

1. Background and Motivation

Modeling complex behavior sequences for imitation and offline policy learning requires capturing multimodal, high-dimensional, and continuous action distributions. Earlier approaches include:

  • Behavior Transformers (BeT): Discretize actions using k-means clustering into $K$ bins, followed by categorical prediction and a small continuous offset. However, k-means is limited by its fixed binning, inability to scale to high-dimensional or long-horizon action spaces, and lack of end-to-end gradient flow.
  • Diffusion Policies: Model a conditional denoising process across actions, capturing multi-modality via sequential refinement. These methods incur high computational cost and inference latency due to the necessity of $\mathcal{O}(T)$ denoising steps.

VQ-BeT replaces these discretization and modeling bottlenecks by introducing hierarchical (residual) vector quantization, enabling learned codebooks to capture both coarse and fine action modes. This approach provides end-to-end differentiability during quantizer training, robust multi-modality, and efficient inference through single-pass decoding (Lee et al., 2024).

2. Hierarchical Vector Quantization Module

VQ-BeT employs Residual Vector Quantization (RVQ) to tokenize continuous action chunks $a_{t:t+n} \in \mathbb{R}^{n \cdot d_a}$ into a sequence of discrete latent codes suitable for transformer-based modeling.

  • Encoder/Decoder Structure: An encoder $\phi: \mathbb{R}^{n \cdot d_a} \rightarrow \mathbb{R}^D$ maps action chunks to latent space; a decoder $\psi: \mathbb{R}^D \rightarrow \mathbb{R}^{n \cdot d_a}$ reconstructs actions from codebook outputs.
  • Quantization: $N_q$ hierarchical quantization layers with codebooks $E^i = \{e^i_j \in \mathbb{R}^D\}_{j=1}^K$ enable residual decomposition:

$$r^0 = x_t = \phi(a_{t:t+n}), \quad k_i = \arg\min_j \|r^{i-1} - e^i_j\|_2, \quad z_q^i = e^i_{k_i}, \quad r^i = r^{i-1} - z_q^i.$$

The final quantized embedding is $z_q(x_t) = \sum_{i=1}^{N_q} z_q^i$.
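The residual decomposition above can be sketched in a few lines. This is a minimal NumPy illustration under assumed shapes and toy codebooks, not the paper's implementation:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization of a latent x with shape (D,).

    codebooks: list of N_q arrays, each of shape (K, D).
    Returns the code indices k_i and the summed quantized embedding z_q.
    """
    residual = x.copy()
    indices, z_q = [], np.zeros_like(x)
    for E in codebooks:
        # Nearest codebook vector to the current residual (L2 distance).
        k = int(np.argmin(np.linalg.norm(E - residual, axis=1)))
        indices.append(k)
        z_q += E[k]
        # Next layer quantizes whatever this layer could not explain.
        residual = residual - E[k]
    return indices, z_q

# Toy usage: two quantizer layers, K = 4 codes, D = 3 latent dims.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 3)) for _ in range(2)]
x = rng.normal(size=3)
ks, z_q = rvq_encode(x, codebooks)
```

Each successive layer encodes the residual left by the previous one, which is how the coarse-to-fine capture of action modes arises.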

  • Losses:
    • Reconstruction: $L_{Recon} = \|a_{t:t+n} - \psi(z_q(\phi(a_{t:t+n})))\|_1$,
    • VQ loss: $L_{vq} = \|\text{sg}[\phi(a_{t:t+n})] - z_q(x_t)\|_2^2$,
    • Commitment: $L_{commit} = \beta \|\phi(a_{t:t+n}) - \text{sg}[z_q(x_t)]\|_2^2$,
    • Total RVQ-VAE loss: $L_{RVQ} = L_{Recon} + L_{vq} + L_{commit}$.

Codebook vectors are updated using an exponential moving average schedule to maintain stability, following [van den Oord et al. 2017]. This quantization infrastructure supports fine-grained capture of multi-modality through residual coding, with two-layer RVQ achieving 20–50% performance gains over vanilla VQ (Lee et al., 2024).
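The EMA codebook update from van den Oord et al. (2017) can be sketched as follows. This is a hypothetical NumPy illustration; the function name, running-statistics interface, and smoothing constants are assumptions, not the paper's code:

```python
import numpy as np

def ema_codebook_update(E, cluster_size, ema_embed, assignments, latents,
                        decay=0.99, eps=1e-5):
    """One EMA update step for a codebook E of shape (K, D).

    assignments: int array (B,) of code indices chosen for each latent.
    latents: array (B, D) of encoder outputs quantized against E.
    cluster_size, ema_embed: running EMA statistics, shapes (K,) and (K, D).
    """
    K = E.shape[0]
    one_hot = np.eye(K)[assignments]        # (B, K) assignment indicators
    counts = one_hot.sum(axis=0)            # how often each code was used
    embed_sum = one_hot.T @ latents         # (K, D) sum of assigned latents

    # Exponential moving averages of usage counts and assigned latents.
    cluster_size = decay * cluster_size + (1 - decay) * counts
    ema_embed = decay * ema_embed + (1 - decay) * embed_sum

    # Laplace smoothing keeps rarely used codes from collapsing to zero.
    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + K * eps) * n
    E = ema_embed / smoothed[:, None]
    return E, cluster_size, ema_embed
```

Because codebook vectors are moved by averaging rather than by gradient descent, the update stays stable even when some codes receive few assignments in a batch.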

3. Transformer-Based Sequence Modeling

After RVQ-VAE pretraining, encoder, decoder, and codebooks are frozen. The core sequence model is an autoregressive, GPT-style transformer that predicts coded action sequences:

  • Tokenization:
    • Each chunk yields $N_q$ code tokens $k_i(t)$.
    • Observation and (optional) goal tokens are embedded via a learned linear layer or CNN (ResNet18 for images).
    • Code tokens are embedded with a learned table $W_{code} \in \mathbb{R}^{K \times d_{emb}}$.
    • Offset head $\zeta_{offset}$ predicts a small continuous correction $\delta a_t$.
  • Architecture:
    • $L$ transformer blocks apply standard multi-head self-attention and MLP layers, with positional encoding.
    • Output heads $\zeta_{code}^i$ provide logits for each code, and $\zeta_{offset}$ outputs the continuous offset.
  • Objectives:
    • Code prediction: focal loss with $\gamma = 2$, $\alpha = 0.25$ for code classification,
    • Offset reconstruction: $L_{offset} = \|a_{t:t+n} - (\psi(\sum_i e^i_{\hat{k}_i(t)}) + \zeta_{offset}(h_t))\|_1$,
    • Total: $L = L_{code} + L_{offset}$, with code-prediction gradients flowing only through the transformer (RVQ-VAE parameters remain frozen).
  • Optimization: Adam with warmup and cosine decay, $1000$–$2000$ training epochs, transformer dropout of $0.1$.
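The code-classification objective above follows the focal loss of Lin et al. (2017) with $\gamma = 2$, $\alpha = 0.25$. A minimal NumPy sketch of that formulation (the function name and array interface are illustrative assumptions):

```python
import numpy as np

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss over K-way code logits.

    logits: (B, K) unnormalized scores over codebook indices.
    targets: (B,) integer indices of the ground-truth codes.
    """
    # Numerically stable log-softmax over the K code logits.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    log_pt = log_p[np.arange(len(targets)), targets]  # log-prob of true code
    pt = np.exp(log_pt)
    # (1 - p_t)^gamma down-weights codes the model already predicts well,
    # so rare action modes contribute more to the gradient.
    return float(np.mean(-alpha * (1.0 - pt) ** gamma * log_pt))
```

The down-weighting of easy examples matters here because the code distribution over a multimodal behavior dataset is typically imbalanced.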

4. Multimodality, Conditioning, and Inference

VQ-BeT supports diverse forms of generative modeling:

  • Multimodality:
    • The hierarchical code structure allows categorical output over $K$ coarse modes, refined by residual codes.
    • Diversity is further encouraged using top-$k$ or nucleus sampling during generation.
    • Entropy analysis on task completion validates that VQ-BeT captures behavioral diversity comparably or better than diffusion models.
  • Conditioning and Partial Observability:
    • Goal tokens can be prepended for conditional behavior generation (e.g., trajectory or state targets).
    • Classifier-free guidance is realized by stochastically omitting goal tokens during training, facilitating interpolation at inference.
    • For partial observations (e.g., ego-state plus objects in self-driving), VQ-BeT incorporates only available features.
  • Efficiency:
    • Single-pass transformer decoding contrasts with the multi-iteration denoising of diffusion policies, enabling $5\times$ faster inference in simulated tasks ($15$ ms/timestep vs. $75$–$100$ ms) and $25\times$ faster on real-robot CPUs ($5$ ms vs. $125$ ms).
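The sampling step inside that single pass can be sketched as follows. This is a minimal NumPy illustration of top-$k$ sampling over one code head; the surrounding model interface described in the comments is hypothetical:

```python
import numpy as np

def sample_top_k(logits, k=5, rng=None):
    """Top-k sampling over code logits: keep the k largest, renormalize, sample."""
    rng = rng if rng is not None else np.random.default_rng()
    top = np.argpartition(logits, -k)[-k:]   # indices of the k best codes
    z = logits[top] - logits[top].max()      # stable softmax over survivors
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(top, p=p))

# Hypothetical single-pass inference loop:
# 1. the transformer produces logits for each of the N_q code heads,
# 2. one index is sampled per head (top-k preserves mode diversity),
# 3. the summed codebook vectors are decoded and the offset head's
#    correction is added to obtain the final continuous action chunk.
```

Because all code heads are read out from one forward pass, there is no iterative refinement loop to amortize, which is the source of the latency gap versus diffusion policies.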

5. Empirical Evaluation and Performance

VQ-BeT was tested on multiple domains, including manipulation, locomotion, and autonomous driving, as well as both simulated and real-robot settings:

| Task Type | VQ-BeT Result | Best Comparator |
| --- | --- | --- |
| PushT (state) | 0.78 (final IoU) | 0.73 (Diff-pol), 0.39 (BeT) |
| Ant multimodal (goals) | 3.22 | 3.12 (Diff-T), 2.73 (BeT) |
| Real-robot (1-phase) | 94% success | 90% (DiffPol-T) |
| Real-robot (2-phase) | 63% success | 37% (DiffPol-T) |
| Long-horizon (>3 subtasks) | $3\times$ higher success | — |
| nuScenes L₂ error (6 s) | 0.73 m | 0.74 m (GPT-Driver) |

VQ-BeT outperforms Behavior Cloning (BC), BeT, and diffusion policy baselines in the majority of tasks. The model achieves higher diversity (entropy of task completion order) without sacrificing accuracy. On the nuScenes driving dataset, it achieves trajectory prediction errors equivalent to leading autoregressive sequence models and exceeds diffusion-based control policies in both L₂ error and collision avoidance (Lee et al., 2024).

6. Architectural Ablations and Design Insights

Several design choices impact VQ-BeT's efficacy:

  • Residual vs. Vanilla VQ: Two-layer RVQ produces 20–50% higher performance than single-layer VQ.
  • Offset Head: Eliminating the offset head increases reconstruction error and reduces long-horizon task success by 30%.
  • Autoregressive Code Prediction: Predicting code tokens in primary→secondary order boosts robustness to real-world noise.
  • Code Weighting: Moderating secondary code losses ($\beta' = 0.1$–$0.6$) stabilizes training and improves performance.

Limitations include the manual tuning of codebook size $K$ and depth $N_q$, with large codebooks sometimes requiring dead-code masking. RVQ-VAE pretraining is a necessary but separate stage; future work may address this via joint learning (Lee et al., 2024).

7. Extensions, Impact, and Future Work

VQ-BeT presents a unified, efficient paradigm for learning and generating complex, multi-modal continuous behaviors under both full and partial observability. It is positioned for:

  • Scalability: Potential to operate on web-scale datasets and learn transferable or shared representations (e.g., for large fleets of robots or human action data).
  • Joint Learning: Prospects for end-to-end joint training of quantizer and transformer to allow adaptive codebook evolution.
  • Fine-tuning: Integration of VQ-based reinforcement learning (VQ-RL) for offline RL policy improvement atop VQ-BeT priors.

These directions aim to further enhance robustness, efficiency, and generalization in sequential decision-making tasks (Lee et al., 2024).
