
Motion Transformer Model

Updated 7 February 2026
  • Motion Transformer is a Transformer-based architecture that uses explicit intention queries to predict multimodal, temporally coherent trajectories.
  • It integrates both static and dynamic intention points to ensure scene-compliant, efficient inference, reducing off-road predictions in complex driving scenarios.
  • Advanced variants incorporate multi-agent reasoning and vision-based planning, improving accuracy and physical plausibility in motion prediction tasks.

A Motion Transformer (MTR) is a Transformer-based architecture specialized for learning and predicting structured, temporally coherent motion, with major application impact on autonomous vehicle trajectory forecasting and multi-object tracking. The defining feature of the Motion Transformer family is its use of a compact set of explicit "intention queries"—learned or scenario-compliant points in space—to anchor each output mode in a multimodal predictive distribution. This approach supports stable training, sharp mode diversity, and highly efficient inference relative to dense goal-candidate grids or latent mixture anchors. The MTR paradigm has undergone significant methodological expansion, including scene-compliant intent generation, multi-agent joint reasoning, control-guided regularization, and extensions to vision-based planning and unsupervised image animation.

1. Core Principles and Canonical Architecture

The foundational MTR instantiation (Shi et al., 2022, Shi et al., 2022, Shi et al., 2023) is a Transformer encoder–decoder forecasting pipeline. Inputs include:

  • Agent histories (ego and non-ego, positions and kinematic state over fixed horizons).
  • Map polylines (HD lane centerlines, boundaries in agent-centric coordinates).

Each input is embedded via a PointNet-style encoder followed by local multi-head self-attention layers, yielding context tokens.

The decoder receives a small set of motion intention queries (typically K = 64), constructed as vector embeddings of spatial centroid points. These intention points are typically obtained by k-means clustering of ground-truth trajectory endpoints from the training set.
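The clustering step above can be sketched with a minimal numpy-only k-means over endpoint coordinates (a simplification of the actual preprocessing; function name and defaults are illustrative, not from the papers):

```python
import numpy as np

def kmeans_intention_points(endpoints, k=64, iters=50, seed=0):
    """Cluster ground-truth trajectory endpoints into K intention points.

    endpoints: (N, 2) array of final (x, y) positions from the training set.
    Returns a (k, 2) array of centroids used as static intention anchors.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct endpoints.
    centroids = endpoints[rng.choice(len(endpoints), k, replace=False)]
    for _ in range(iters):
        # Assign each endpoint to its nearest centroid.
        d = np.linalg.norm(endpoints[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Recompute centroids; keep the old centroid if a cluster is empty.
        for j in range(k):
            members = endpoints[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids
```

Each resulting centroid is then embedded (e.g., by an MLP) to form one decoder query, so each of the K modes is anchored to a concrete spatial intent.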

Each decoder layer performs:

  • Self-attention among intention queries: Promotes inter-mode competition and diversity.
  • Cross-attention to context tokens: Each query attends dynamically to relevant agent/map tokens, with dynamic map pooling around current predicted endpoints.
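The two attention steps above can be illustrated with a stripped-down numpy sketch (single head, no projection matrices, layer norm, or feed-forward sublayer; all names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: (Tq, d), (Tk, d), (Tk, d) -> (Tq, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def decoder_layer(queries, context):
    """One simplified MTR decoder step.

    queries: (K, d) intention query embeddings.
    context: (M, d) encoder tokens (agent histories + map polylines).
    """
    # 1) Self-attention among intention queries (inter-mode interaction).
    queries = queries + attention(queries, queries, queries)
    # 2) Cross-attention from each query to the scene context tokens.
    queries = queries + attention(queries, context, context)
    return queries
```

In the full model, step 2 also restricts the attended map tokens to a neighborhood of each query's current predicted endpoint (dynamic map pooling), which this sketch omits.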

After several refinement layers, each intention query outputs:

  • Per-mode classification score p_k (categorical; selects the best-matching trajectory mode).
  • Gaussian mixture model (GMM) parameters over future time steps (μ_{x,t}, μ_{y,t}, σ_{x,t}, σ_{y,t}, ρ_t), representing the per-step trajectory mean, standard deviations, and correlation.

The loss combines:

  • Negative log-likelihood (NLL) under selected GMM component for the trajectory closest to the ground-truth endpoint ("hard assignment").
  • Cross-entropy loss on the query classification logit.
  • Auxiliary dense L1 prediction loss on a parallel single-mode future prediction head.
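The hard-assignment NLL and classification terms can be sketched as follows (a simplified numpy version using the standard bivariate-Gaussian log-density; the auxiliary dense L1 head is omitted, and function names are illustrative):

```python
import numpy as np

def gmm_nll(traj_gt, mu, sigma, rho):
    """Per-step bivariate Gaussian NLL, summed over the horizon.

    traj_gt: (T, 2) ground truth; mu: (T, 2) means;
    sigma: (T, 2) positive std devs; rho: (T,) correlations in (-1, 1).
    """
    dx = (traj_gt[:, 0] - mu[:, 0]) / sigma[:, 0]
    dy = (traj_gt[:, 1] - mu[:, 1]) / sigma[:, 1]
    one_m_r2 = 1.0 - rho ** 2
    z = (dx ** 2 - 2.0 * rho * dx * dy + dy ** 2) / one_m_r2
    log_norm = np.log(2.0 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(one_m_r2))
    return np.sum(0.5 * z + log_norm)

def mtr_loss(traj_gt, mus, sigmas, rhos, logits):
    """Hard-assignment loss: NLL on the closest mode + CE on its logit.

    mus: (K, T, 2), sigmas: (K, T, 2), rhos: (K, T), logits: (K,).
    """
    # Hard assignment: mode whose endpoint is closest to the GT endpoint.
    k_star = int(np.argmin(np.linalg.norm(mus[:, -1] - traj_gt[-1], axis=-1)))
    nll = gmm_nll(traj_gt, mus[k_star], sigmas[k_star], rhos[k_star])
    # Cross-entropy pushing the classifier to select k_star (log-softmax).
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
    ce = -log_probs[k_star]
    return nll + ce, k_star
```

Only the selected component receives the regression gradient, which is what lets each mode specialize on its own spatial intent.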

This design stabilizes multimodal trajectory learning—each mode specializes on a spatially grounded intent, alleviating mode collapse and improving coverage (Shi et al., 2022, Shi et al., 2023).

2. Advances in Intention Querying: Static, Dynamic, and Scene-Compliant Intents

Static Intention Points

Baseline MTR approaches use a globally fixed set of K intention points obtained via k-means clustering (Shi et al., 2022, Shi et al., 2023). While this is effective for common road geometries, it scatters support for modes across both feasible and infeasible regions in atypical scenes, leading to potential off-road or unrealistic predictions (Sun et al., 2024, Demmler et al., 22 Apr 2025).

Scene-Compliant and Dynamic Intention Points

Subsequent work addresses this by integrating scene-compliant intention points (Sun et al., 2024) and on-the-fly dynamic query generation (Demmler et al., 22 Apr 2025):

  • Scene-compliant intentions: Per-scenario sampled from reachable road centerline waypoints under traffic constraints, using a BFS-based exploration of the lane graph. This restricts MTR's output distribution to legal, feasible areas, greatly reducing off-road false positives (cross-boundary rate drops from 66% to 39%) and enhancing miss rate and mAP (Sun et al., 2024).
  • Dynamic queries: For each inference scene, on-the-fly clustering of reachable map graph nodes produces intention anchors tailored to local topology. These are embedded and injected as decoder queries, yielding further improvements in final displacement error and mAP among legal trajectories (Demmler et al., 22 Apr 2025).
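The BFS-based lane-graph exploration behind scene-compliant intentions can be sketched as follows (the `lane_graph` dict schema is hypothetical; real implementations work on HD-map lane graphs with traffic-rule filtering, which this omits):

```python
from collections import deque

def reachable_waypoints(lane_graph, start_lane, max_depth=4):
    """Collect legal candidate intention points via BFS over lane successors.

    lane_graph: dict lane_id -> {"waypoints": [(x, y), ...], "successors": [ids]}
    (hypothetical schema). Returns waypoints on all lanes reachable from
    start_lane within max_depth successor hops.
    """
    seen = {start_lane}
    points = []
    queue = deque([(start_lane, 0)])
    while queue:
        lane, depth = queue.popleft()
        points.extend(lane_graph[lane]["waypoints"])
        if depth < max_depth:
            for nxt in lane_graph[lane]["successors"]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return points
```

Sampling or clustering these waypoints per scenario yields intention anchors confined to the legal road network, which is the mechanism behind the reported drop in cross-boundary predictions.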

Mixed approaches combine static and dynamic queries to retain coverage for rare off-map or illegal maneuvers, which pure dynamic anchors may fail to represent (Demmler et al., 22 Apr 2025).

3. Extensions: Multi-Agent, Control Guidance, and Planning Integration

Multi-Agent Extensions: MTR++

MTR++ (Shi et al., 2023) generalizes the original single-agent MTR to joint prediction for N agents in the scene. The context encoder is symmetrized by using polyline-centric local frames for all map/agent tokens and query-centric self-attention in both encoder and decoder. Mutually-guided intention querying enables cross-agent mode interaction. This yields gains in multi-agent mAP and better scene-consistent multi-agent trajectories.

Control-Guided MTR

ControlMTR (Sun et al., 2024) augments the motion transformer pipeline with a parallel control branch that regresses low-level action commands (acceleration and heading rate). These are used to form kinematically-feasible trajectories via discrete-time Euler integration, whose outputs regularize the main GMM trajectory head. Guidance losses link the predicted and control-based trajectories, further restricting the solution manifold to feasible, physically plausible predictions and raising both soft mAP (+5.2%) and drivable area compliance (cross-boundary –41.5%) relative to base MTR.
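The discrete-time Euler integration of the control branch can be sketched with a unicycle model (a simplification; the actual vehicle model and parameter names in ControlMTR may differ):

```python
import numpy as np

def integrate_controls(x0, y0, v0, heading0, accel, yaw_rate, dt=0.1):
    """Roll out a kinematically feasible trajectory from predicted controls.

    accel, yaw_rate: (T,) per-step acceleration [m/s^2] and heading rate
    [rad/s]. Euler-integrates a unicycle model and returns (T, 2) positions.
    """
    x, y, v, h = x0, y0, v0, heading0
    traj = []
    for a, w in zip(accel, yaw_rate):
        v = v + a * dt          # update speed from acceleration
        h = h + w * dt          # update heading from yaw rate
        x = x + v * np.cos(h) * dt
        y = y + v * np.sin(h) * dt
        traj.append((x, y))
    return np.array(traj)
```

Because every point is generated by integrating bounded controls, the resulting trajectory is physically realizable by construction; the guidance loss then pulls the GMM head's trajectories toward this feasible rollout.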

Vision-based Planning: MTR-VP

MTR-VP (Keskar et al., 27 Nov 2025) adapts the encoder stack to operate on raw panoramic images (via a ViT) and past kinematic state, replacing polyline tokens. The decoder instead uses discrete intention embeddings (derived from driving commands) or foundation-model features as queries, replacing the learned intention queries.

While its final ADE and RFS scores are competitive with prior planning baselines, the results suggest that current vision/kinematics fusion is suboptimal at leveraging scene context. Notably, retaining a multimodal output head (multiple predicted trajectories) consistently outperforms single-hypothesis prediction.

4. Architectural and Training Specifics

| Variant | Encoder | Decoder | Intention Queries (K) | Key Innovations |
|---|---|---|---|---|
| MTR (Shi et al., 2022) | Polyline (PointNet + local attention) | Transformer, static + dynamic queries | 64 | Dual-query mechanism (global intent + dynamic refinement) |
| MTR++ (Shi et al., 2023) | Polyline-centric, symmetric | Multi-agent query | up to N × 64 | Mutually-guided intention querying |
| MGTR (Gan et al., 2023) | Multi-granular (LiDAR + map) | Motion- & trajectory-aware cross-attention | 64 | Multi-granular LiDAR fusion, dynamic context pooling |
| ControlMTR (Sun et al., 2024) | Polyline, MSG/MCG fusion | Kinematic control + GMM heads | 64 | Scene-compliant intention points, auxiliary control loss |
| MTR-VP (Keskar et al., 27 Nov 2025) | ViT (images) + kinematics | No learnable queries | 20 | Cross-attention on visual/intent features, foundation-model queries |

Key training practices include hard-assignment of output modes based on L2 proximity to ground-truth, non-maximum suppression for top-k mode selection, and auxiliary losses for dense single-mode prediction (Shi et al., 2022, Gan et al., 2023).
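The non-maximum suppression step used for top-k mode selection can be sketched as a greedy filter over predicted endpoints (threshold value and names are illustrative):

```python
import numpy as np

def nms_select(endpoints, scores, top_k=6, dist_thresh=2.5):
    """Greedy NMS over predicted trajectory endpoints.

    endpoints: (K, 2) endpoint positions; scores: (K,) mode confidences.
    Keeps the highest-scoring modes while suppressing near-duplicates
    whose endpoints lie within dist_thresh meters of an already-kept mode.
    """
    order = np.argsort(-scores)  # descending by confidence
    keep = []
    for i in order:
        if all(np.linalg.norm(endpoints[i] - endpoints[j]) > dist_thresh
               for j in keep):
            keep.append(int(i))
        if len(keep) == top_k:
            break
    return keep
```

This keeps the submitted trajectory set both confident and spatially diverse, which matters for mAP-style metrics that penalize redundant modes.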

5. Benchmarks, Empirical Performance, and Ablations

Motion Transformer models and their descendants consistently achieve state-of-the-art minADE, minFDE, Miss Rate, and mAP on the Waymo Open Motion Dataset (WOMD) (Shi et al., 2022, Shi et al., 2023, Gan et al., 2023, Sun et al., 2024). For example, MTR++ outperforms alternative methods by >8% mAP in marginal prediction and >11% in joint prediction. MGTR extends gains with the inclusion of multi-granular LiDAR and map context features (Gan et al., 2023).
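The minADE/minFDE metrics referenced here are the minimum, over the K predicted modes, of the average and final displacement errors; a minimal sketch:

```python
import numpy as np

def min_ade_fde(preds, gt):
    """minADE / minFDE over K predicted trajectories.

    preds: (K, T, 2) predicted trajectories; gt: (T, 2) ground truth.
    Returns (minADE, minFDE) in the same units as the inputs.
    """
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) per-step errors
    ade = dists.mean(axis=1)   # average displacement per mode
    fde = dists[:, -1]         # final displacement per mode
    return ade.min(), fde.min()
```

Because the minimum is taken over modes, these metrics reward coverage: a prediction set only needs one mode near the ground truth, which is why multimodal heads dominate on them.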

Component-wise ablations show:

  • Adding multi-granular map/LiDAR context or motion-aware context search yields incremental mAP gains (Gan et al., 2023).
  • Scene-compliant intention points and control-guided auxiliary heads dramatically suppress invalid predictions and improve miss rate (Sun et al., 2024).
  • Dynamic queries particularly enhance long-term, map-conform forecast accuracy at the cost of reduced illegal maneuver coverage, favoring a mixed anchor scheme in practice (Demmler et al., 22 Apr 2025).
  • Multi-agent symmetric modeling and mutually-guided queries (MTR++) add +2.5% mAP over the single-agent MTR.

In vision-centric planning tasks, multi-modal trajectory heads consistently yield superior performance, but vision/kinematics fusion remains a limiting factor (Keskar et al., 27 Nov 2025).

6. Applications Beyond Autonomous Driving

While the dominant application of MTR is multimodal motion prediction in autonomous driving, the Motion Transformer paradigm is adaptable to other structured motion domains:

  • Multi-object tracking: Motion-Aware Transformer (MATR) (Yang et al., 26 Sep 2025) and its DETR-based progenitors explicitly include motion-prediction modules for query advancement in end-to-end MOT, with substantial improvements in association accuracy.
  • Image animation: Vision-transformer-based MTR architectures (Tao et al., 2022) jointly model patch tokens and motion tokens to infer partwise motion for unsupervised human animation, outperforming CNN-based rivals in both quantitative and user studies.

These extensions demonstrate that the core design pattern of intention/motion queries—bridging explicit spatial goals and iterative trajectory refinement—scales to a diverse array of temporally structured prediction problems.

7. Limitations, Practical Considerations, and Future Directions

Key limitations and open challenges include:

  • Coverage of rare/illegal behaviors: Purely scene-compliant or dynamic intention queries cannot represent illegal or out-of-map actions. Mixed strategies partially address this at the cost of reintroducing spurious anchors (Demmler et al., 22 Apr 2025).
  • Drivable area confinement vs. diversity: There is a trade-off between feasible mode support and coverage for rare edge cases, especially in densely multimodal urban scenes (Sun et al., 2024).
  • Fusion of heterogeneous modalities: Vision-based MTR variants struggle to effectively integrate high-dimensional visual context with kinematics for planning (Keskar et al., 27 Nov 2025).
  • Robustness to map incompleteness: Dynamic intent approaches depend on map fidelity; real-world deployment may demand fallback mechanisms for missing or corrupted map data (Sun et al., 2024).

Prospective advances may incorporate learning-based intention point selection, higher-fidelity vehicle/agent dynamics, differentiable collision-avoidance modules, richer cross-modal fusion (especially for vision), and further optimization of confidence scoring/alignment in highly multimodal benchmarks.


References: (Shi et al., 2022, Shi et al., 2022, Shi et al., 2023, Gan et al., 2023, Sun et al., 2024, Demmler et al., 22 Apr 2025, Keskar et al., 27 Nov 2025, Tao et al., 2022, Yang et al., 26 Sep 2025)
