S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention

Published 18 Mar 2024 in cs.LG and cs.AI (arXiv:2403.11772v2)

Abstract: Motivated by the challenge of seamless cross-dataset transfer in EEG signal processing, this article presents an exploratory study on the use of Joint Embedding Predictive Architectures (JEPAs). In recent years, self-supervised learning has emerged as a promising approach for transfer learning across various domains, but its application to EEG signals remains largely unexplored. We introduce Signal-JEPA for representing EEG recordings, which includes a novel domain-specific spatial block masking strategy and three novel architectures for downstream classification. The study is conducted on a dataset of 54 subjects, and the downstream performance of the models is evaluated on three different BCI paradigms: motor imagery, ERP, and SSVEP. Our study provides preliminary evidence for the potential of JEPAs in EEG signal encoding. Notably, our results highlight the importance of spatial filtering for accurate downstream classification, and they reveal that the length of the pre-training examples, but not the mask size, influences downstream performance.
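The abstract's spatial block masking idea can be illustrated with a minimal sketch. The code below is a hypothetical interpretation, not the paper's actual implementation: it masks a contiguous block of spatially neighbouring EEG channels by picking a random seed electrode and selecting its nearest neighbours from 2D electrode coordinates. The function name, signature, and the linear electrode layout in the example are all assumptions for illustration.

```python
import numpy as np

def spatial_block_mask(positions, block_size, rng=None):
    """Select a contiguous spatial block of EEG channels to mask.

    positions:  (n_channels, 2) array of 2D electrode coordinates.
    block_size: number of channels in the masked block.
    Returns a boolean mask where True marks a masked channel.
    Hypothetical sketch; the paper's exact strategy may differ.
    """
    rng = np.random.default_rng(rng)
    n_channels = positions.shape[0]
    # Pick a random seed electrode, then mask its nearest neighbours,
    # so the masked region is spatially contiguous on the scalp.
    seed = rng.integers(n_channels)
    dist = np.linalg.norm(positions - positions[seed], axis=1)
    block = np.argsort(dist)[:block_size]
    mask = np.zeros(n_channels, dtype=bool)
    mask[block] = True
    return mask

# Example: 8 electrodes placed on a line, mask a block of 3 neighbours.
pos = np.stack([np.arange(8.0), np.zeros(8)], axis=1)
mask = spatial_block_mask(pos, block_size=3, rng=0)
print(int(mask.sum()))  # 3 channels masked
```

In a JEPA-style pre-training loop, the encoder would see only the unmasked channels, and a predictor would be trained to regress the embeddings of the masked spatial block from the visible context.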
