
Point-JEPA: 3D Self-Supervised Framework

Updated 15 January 2026
  • Point-JEPA is a self-supervised learning framework for 3D point clouds that extracts context-aware representations via joint embedding predictive architectures.
  • It employs a permutation-invariant PointNet tokenizer and a spatial sequencer to order patches, enabling efficient block-masked prediction without reconstruction.
  • The framework achieves state-of-the-art accuracy on 3D benchmarks and reduces training time and label dependence, particularly for robotic grasp prediction tasks.

Point-JEPA is a self-supervised learning framework for 3D point cloud data that leverages joint embedding predictive architectures to learn context-aware representations without reconstructing raw input or relying on contrastive objectives. Its core innovation lies in efficiently imposing spatial structure on unordered point clouds via a sequencer, enabling multi-block prediction and label-efficient downstream transfer, particularly for grasp joint-angle prediction. The framework achieves state-of-the-art results on standard benchmarks with fast convergence and strong data efficiency, notably in robotic grasping scenarios.

1. Architectural Design

Point-JEPA encodes a raw point cloud $X \in \mathbb{R}^{N \times 3}$ by first partitioning each object into $c$ local "patches": $c$ center points are selected via Farthest Point Sampling, and the $k$ nearest neighbors of each center $p_i$ are gathered as $x_i \in \mathbb{R}^{k \times 3}$ and normalized as $x_i \leftarrow x_i - p_i$. Each patch is embedded by a permutation-invariant PointNet-style tokenizer: shared MLP layers followed by max-pooling across the points in the patch yield an intermediate $u_i$, and a second MLP and max-pool produce the patch token $z_i \in \mathbb{R}^D$. This representation is shared across context and target predictions and incorporates learned positional encodings (Saito et al., 2024, Guzelkabaagac et al., 13 Sep 2025).
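As a concrete illustration, the patching step (FPS center selection, $k$-nearest-neighbor grouping, and center normalization) can be sketched in NumPy. The function names and the deterministic FPS starting index are illustrative choices, not specified by the papers:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, c: int) -> np.ndarray:
    """Pick c well-spread center indices from an (N, 3) cloud."""
    n = len(points)
    centers = [0]  # deterministic start, for illustration only
    dist = np.full(n, np.inf)
    for _ in range(c - 1):
        # Distance of every point to its nearest already-chosen center.
        dist = np.minimum(dist, np.linalg.norm(points - points[centers[-1]], axis=1))
        centers.append(int(np.argmax(dist)))
    return np.array(centers)

def make_patches(points: np.ndarray, c: int, k: int):
    """Group an (N, 3) cloud into c patches of k nearest neighbors,
    each translated so its center p_i sits at the origin (x_i <- x_i - p_i)."""
    idx = farthest_point_sampling(points, c)
    centers = points[idx]                                                  # (c, 3)
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)  # (c, N)
    knn = np.argsort(d, axis=1)[:, :k]                                     # (c, k)
    patches = points[knn] - centers[:, None, :]                            # (c, k, 3)
    return patches, centers
```

The tokenizer itself (shared MLPs with max-pooling) would then map each $(k, 3)$ patch to a $D$-dimensional token.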

To address the lack of canonical ordering in point clouds, Point-JEPA defines a sequencer that greedily orders patch indices such that adjacent tokens are spatially close. The procedure is as follows:

import numpy as np

def sequence_patches(centers: np.ndarray) -> list[int]:
    """Greedily order patch centers (c, 3) so adjacent tokens are spatially close."""
    c = len(centers)
    visited = set()
    seq = []
    # Start from the patch with the smallest coordinate sum.
    current = int(np.argmin(centers.sum(axis=1)))
    seq.append(current)
    visited.add(current)
    while len(visited) < c:
        # Nearest unvisited center to the current one.
        dists = np.linalg.norm(centers - centers[current], axis=1)
        dists[list(visited)] = np.inf
        current = int(np.argmin(dists))
        seq.append(current)
        visited.add(current)
    return seq

This sequencer incurs its $O(c^2)$ cost only once per object and subsequently enables fast block sampling of spatially contiguous context and target tokens for masked prediction (Saito et al., 2024).

2. Predictive Objective and Pipeline

Point-JEPA employs a joint embedding predictive architecture. The context encoder $f_\theta$ (a Transformer) processes the token sequence with the target (masked) blocks removed and mask placeholders inserted, while the target encoder $f_{\bar{\theta}}$ (an EMA copy of $f_\theta$'s weights) processes the full unmasked sequence. Context embeddings are aggregated (e.g., via cross-patch attention pooling or a Transformer), and mask embeddings plus spatial positional encodings are added for the missing (target) positions.

The predictor $g_\phi$ (a Transformer or lightweight MLP) takes the context stream's output at the masked indices and predicts the latent codes of the masked patches. The learning objective eschews contrastive losses (no InfoNCE, no explicit positives/negatives); instead it directly regresses the predicted embedding onto the target encoder's embedding using the Smooth L1 loss (for pretraining on general 3D data) (Saito et al., 2024):

L(θ,ϕ)=EcloudsEZc,Zt pathsitargetsSmoothL1(z^i,zˉi),L(\theta, \phi) = \mathbb{E}_{\mathrm{clouds}} \mathbb{E}_{Z_c, Z_t \text{ paths}} \sum_{i \in \mathrm{targets}} \operatorname{SmoothL1}(\hat{z}_i, \bar{z}_i),

where $\bar{z}_i = f_{\bar{\theta}}(x_i)$, $\hat{z}_i = g_\phi(h_\theta(Z_c), \mathrm{mask}_i, \mathrm{pos}_i)$, and $\bar{\theta}$ is updated via EMA with momentum $\tau$ ramped from $0.995$ to $1.0$ per training step. This regression happens entirely in the learned embedding space, not the input point space (Saito et al., 2024, Guzelkabaagac et al., 13 Sep 2025). In the grasping transfer setting, the loss is instead an MSE:

LJEPA=1BiBz~i(c)stopgrad(zi(t))22,\mathcal{L}_\mathrm{JEPA} = \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \|\tilde{z}_i^{(c)} - \text{stopgrad}(z_i^{(t)})\|_2^2,

where $\mathcal{B}$ indexes the target patches. There is no reconstruction and no contrastive projection head, and layer normalization is applied at the output (Guzelkabaagac et al., 13 Sep 2025).
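A minimal sketch of the Smooth L1 regression loss and the EMA target update, assuming weights stored as a flat dict of arrays; the per-step linear momentum ramp is one plausible schedule, as the papers only state the endpoints:

```python
import numpy as np

def smooth_l1(pred: np.ndarray, target: np.ndarray, beta: float = 1.0) -> float:
    """Smooth L1 (Huber-style) regression loss, applied in embedding space."""
    diff = np.abs(pred - target)
    per_elem = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return float(per_elem.mean())

def ema_update(theta: dict, theta_bar: dict, tau: float) -> None:
    """Target encoder update: theta_bar <- tau * theta_bar + (1 - tau) * theta."""
    for name, w in theta.items():
        theta_bar[name] = tau * theta_bar[name] + (1.0 - tau) * w

def momentum_schedule(step: int, total_steps: int,
                      tau0: float = 0.995, tau1: float = 1.0) -> float:
    """Linear ramp of the EMA momentum from 0.995 to 1.0 over training."""
    return tau0 + (tau1 - tau0) * step / max(total_steps - 1, 1)
```

Only $\theta$ receives gradients; the target branch is detached (the stop-gradient in the MSE variant) and tracks $\theta$ purely through `ema_update`.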

3. Context/Target Sampling and Efficiency

The sequencer allows sampling of spatially contiguous "blocks" for both context and target tokens efficiently by slicing contiguous windows of the precomputed sequence. For each pretraining instance, $M$ random contiguous blocks are sampled as targets (typically 15–20% of tokens) and one contiguous block is sampled as context (typically 40–75% of tokens), maintaining spatial locality in both (Saito et al., 2024).
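Given the precomputed sequence, block sampling reduces to window slicing. The sketch below is a plausible reading of the scheme; the papers' exact overlap handling may differ (here, context tokens falling inside a target block are simply dropped so targets stay hidden from the context encoder):

```python
import random

def sample_blocks(seq: list[int], num_targets: int, target_frac: float,
                  context_frac: float, rng: random.Random):
    """Sample M contiguous target windows and one contiguous context window
    from the precomputed spatial sequence of patch indices."""
    c = len(seq)
    t_len = max(1, int(target_frac * c))
    targets = []
    for _ in range(num_targets):
        start = rng.randrange(0, c - t_len + 1)
        targets.append(seq[start:start + t_len])
    ctx_len = max(1, int(context_frac * c))
    start = rng.randrange(0, c - ctx_len + 1)
    target_set = {i for block in targets for i in block}
    # Drop context tokens that overlap any target block.
    context = [i for i in seq[start:start + ctx_len] if i not in target_set]
    return context, targets
```

Because `seq` is fixed per object, each draw costs $O(c)$ slicing rather than a fresh all-pairs distance computation.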

Sampling via the sequencer reduces block selection from $O(bc^2)$ to $O(b)$ per batch (with batch size $b$), since the expensive all-pairs distance computation is performed only once per object. On hardware such as an RTX A5500, pretraining converges within ~8 hours on ModelNet40, faster than Point-MAE, Point-M2AE, and related methods (12–15 hours) (Saito et al., 2024).

4. Downstream Applications: Grasp Prediction

Point-JEPA pretrained encoders serve as backbones for downstream supervised tasks such as grasp joint-angle prediction. In this scenario, after pretraining, only the context encoder's per-patch outputs are retained. These are aggregated (e.g., via attention) into a global embedding $h \in \mathbb{R}^D$ and concatenated with a 7-DOF wrist pose vector $p \in \mathbb{R}^7$ (Guzelkabaagac et al., 13 Sep 2025).
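A minimal sketch of this aggregation, using single-query dot-product attention pooling as one plausible instantiation (the learned query vector and the exact pooling form are assumptions, not details from the paper):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def grasp_head_input(patch_tokens: np.ndarray, wrist_pose: np.ndarray,
                     query: np.ndarray) -> np.ndarray:
    """Attention-pool per-patch tokens (c, D) into a global h, then
    concatenate the 7-DOF wrist pose to form the head input [h, p]."""
    scores = patch_tokens @ query / np.sqrt(patch_tokens.shape[1])  # (c,)
    h = softmax(scores) @ patch_tokens                              # (D,)
    return np.concatenate([h, wrist_pose])                          # (D + 7,)
```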

The downstream head is a lightweight, multi-hypothesis MLP $g$ that maps $[h, p]$ to $K = 5$ candidate 12-DOF joint-angle vectors $\{\widehat{\mathbf{j}}_k\}_{k=1}^K$ and logit scores $\ell \in \mathbb{R}^K$. The head is trained with a winner-takes-all (min-over-$K$) objective:

$$k^* = \arg\min_k \|\widehat{\mathbf{j}}_k - \mathbf{j}\|_2^2,$$

$$\mathcal{L}_{\mathrm{WTA}} = \|\widehat{\mathbf{j}}_{k^*} - \mathbf{j}\|_2^2 + \alpha\, \mathrm{CE}(\ell, k^*),$$

encouraging diversity across hypotheses by coupling them to a cross-entropy loss over the index of the best candidate. Inference uses top-logit selection: the predicted joint vector is $\widehat{\mathbf{j}}_{\hat{k}}$ with $\hat{k} = \arg\max_k \ell_k$ (Guzelkabaagac et al., 13 Sep 2025). This approach preserves multimodality and yields practical label efficiency in grasp learning.
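The min-over-$K$ training objective and top-logit inference can be sketched as follows; $\alpha$ and the log-softmax cross-entropy follow the equations above, while array shapes are illustrative:

```python
import numpy as np

def wta_loss(hyps: np.ndarray, logits: np.ndarray, joints: np.ndarray,
             alpha: float = 1.0):
    """Winner-takes-all: L2 loss on the best of K hypotheses (K, 12),
    plus cross-entropy pushing the selector toward that hypothesis."""
    errs = np.sum((hyps - joints) ** 2, axis=1)   # squared L2 per hypothesis, (K,)
    k_star = int(np.argmin(errs))
    # Numerically stable log-softmax for the cross-entropy term.
    log_probs = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    return errs[k_star] + alpha * (-log_probs[k_star]), k_star

def predict(hyps: np.ndarray, logits: np.ndarray) -> np.ndarray:
    """Inference: return the hypothesis with the highest selector logit."""
    return hyps[int(np.argmax(logits))]
```

Only the winning hypothesis receives a regression gradient, which is what lets the remaining candidates specialize to other grasp modes.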

5. Empirical Performance

Point-JEPA attains state-of-the-art accuracy on standard 3D recognition and segmentation benchmarks under both frozen linear evaluation and end-to-end fine-tuning:

  • ModelNet40 (linear eval, SVM on pooled encoder outputs): 93.7% ± 0.2
  • ModelNet40 (end-to-end, with voting): 94.1% ± 0.1
  • ScanObjectNN (OBJ-BG): 92.9% ± 0.4
  • Few-shot learning (ModelNet40): 97.4% ± 2.2 (5-way, 10-shot), 99.2% ± 0.8 (5-way, 20-shot)
  • ShapeNetPart part segmentation: mIoU$_C$ = 83.9%, mIoU$_I$ = 85.8%

In robot grasp joint prediction, Point-JEPA reduces mean RMSE by 7.7–25.9% in low-label regimes (1–25% of objects labeled) and matches fully supervised performance at 100% labeling. Coverage@15° (the fraction of predictions with a hypothesis within 15° of ground truth) likewise increases with JEPA pretraining. The gap between the best-of-$K$ hypothesis and the top-logit selection also narrows, reflecting improved selector alignment (Guzelkabaagac et al., 13 Sep 2025).

Table: Grasp Joint Prediction RMSE (radians), DLR-Hand II

| Label Budget | Scratch | JEPA-pretrained | Rel. Gain |
|---|---|---|---|
| 1% | 0.363 ± 0.002 | 0.335 ± 0.003 | +7.7% |
| 10% | 0.335 ± 0.003 | 0.303 ± 0.009 | +9.6% |
| 25% (A+B) | 0.332 ± 0.002 | 0.246 ± 0.012 | +25.9% |
| 100% (A+B) | 0.235 ± 0.002 | 0.234 ± 0.008 | +0.4% |

(Guzelkabaagac et al., 13 Sep 2025)

6. Methodological Significance and Limitations

Point-JEPA demonstrates that context-aware point cloud patch features can be learned efficiently without explicit negative pairs, reconstruction, or auxiliary modalities. This eliminates the computational overhead of contrastive learning and input-space recovery, accelerating pretraining and enhancing label efficiency for geometric downstream tasks.

Limitations identified include reduced local detail recovery versus autoencoder-style methods for dense per-point labeling tasks, dependency on pretraining dataset diversity for sim-to-real transfer, and fixed patching strategies. Potential extensions include geometry-aware/overlapping patching, pretraining on real-sensor data, regularizers for more diverse multi-hypothesis prediction, lightweight fine-tuning (adapters, LoRA), and closed-loop robotic evaluation (Guzelkabaagac et al., 13 Sep 2025).

7. Context and Impact in 3D Self-Supervised Learning

Point-JEPA advances self-supervised learning in the point cloud domain by enabling block-masked prediction with efficient permutation-invariant tokenization and sequence-structured sampling. This approach significantly narrows the label efficiency gap for manipulation planning and object recognition while reducing the compute burden for pretraining and transfer. Its architecture reflects principles from joint embedding predictive models in vision and language, with adaptations tailored to unordered geometric data (Saito et al., 2024, Guzelkabaagac et al., 13 Sep 2025).

A plausible implication is that the paradigm demonstrated by Point-JEPA—predicting latent representations within spatial context blocks—may generalize to other 3D modalities and manipulation tasks, subject to tailored patching and masking strategies for point-wise versus global tasks. Future work on sim-to-real transfer and greater architectural flexibility is underway.
