
Subject-Independent Deep Architecture for EEG-based Motor Imagery Classification

Published 27 Jan 2024 in eess.SP and cs.LG (arXiv:2402.09438v1)

Abstract: Motor imagery (MI) classification based on electroencephalogram (EEG) is a widely-used technique in non-invasive brain-computer interface (BCI) systems. Since EEG recordings suffer from heterogeneity across subjects and labeled data insufficiency, designing a classifier that performs the MI independently from the subject with limited labeled samples would be desirable. To overcome these limitations, we propose a novel subject-independent semi-supervised deep architecture (SSDA). The proposed SSDA consists of two parts: an unsupervised and a supervised element. The training set contains both labeled and unlabeled data samples from multiple subjects. First, the unsupervised part, known as the columnar spatiotemporal auto-encoder (CST-AE), extracts latent features from all the training samples by maximizing the similarity between the original and reconstructed data. A dimensional scaling approach is employed to reduce the dimensionality of the representations while preserving their discriminability. Second, a supervised part learns a classifier based on the labeled training samples using the latent features acquired in the unsupervised part. Moreover, we employ center loss in the supervised part to minimize the embedding space distance of each point in a class to its center. The model optimizes both parts of the network in an end-to-end fashion. The performance of the proposed SSDA is evaluated on test subjects who were not seen by the model during the training phase. To assess the performance, we use two benchmark EEG-based MI task datasets. The results demonstrate that SSDA outperforms state-of-the-art methods and that a small number of labeled training samples can be sufficient for strong classification performance.


Summary

  • The paper presents a novel semi-supervised deep architecture that combines a Columnar Spatiotemporal Auto-Encoder with a supervised classifier using center loss for subject-independent EEG motor imagery classification.
  • It extracts discriminative latent features from both labeled and unlabeled EEG data to address inter-subject variability and overcome the challenge of limited training samples.
  • Experiments on the PhysioNet and BCI Competition IV 2a datasets demonstrate superior accuracy and highlight the model's potential for plug-and-play BCI applications in neurorehabilitation and assistive device control.

The paper "Subject-Independent Deep Architecture for EEG-based Motor Imagery Classification" presents a methodology for improving motor imagery (MI) classification in non-invasive brain-computer interface (BCI) systems that use electroencephalogram (EEG) data. Acknowledging two inherent challenges, heterogeneity across subjects and insufficient labeled data, the study introduces a subject-independent semi-supervised deep architecture (SSDA). The approach stands out among EEG-based MI methods because it performs classification without subject-specific calibration.

Core Architectural Framework

The SSDA devised in this study comprises two integral components:

  1. Columnar Spatiotemporal Auto-Encoder (CST-AE): Serving as the unsupervised part of the architecture, CST-AE is responsible for extracting latent features from both labeled and unlabeled EEG samples. It achieves this by maximizing the similarity between original inputs and their reconstructions. By implementing a dimensional scaling technique, the model aims to retain the discriminative power of representations while reducing their dimensionality.
  2. Supervised Classifier with Center Loss: Utilizing the latent features acquired from the auto-encoder, the supervised segment of the SSDA trains a classifier leveraging a limited number of labeled samples. A unique aspect of this classifier is its employment of center loss, which minimizes intra-class variability by drawing each data point closer to its class center in the embedding space.
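The center loss term described above can be sketched in NumPy as follows. The embedding shapes, the moving-average center update, and the `alpha` hyperparameter are illustrative assumptions for a Wen et al.-style center loss, not the paper's actual implementation:

```python
import numpy as np

def center_loss(embeddings, labels, centers):
    # Half the mean squared distance of each embedding to its class center;
    # minimizing this pulls same-class points together in the embedding space.
    diffs = embeddings - centers[labels]          # (N, D) per-sample offsets
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def update_centers(embeddings, labels, centers, alpha=0.5):
    # Move each class center a step of size alpha toward the mean of the
    # embeddings currently assigned to that class (hypothetical update rule).
    new_centers = centers.copy()
    for c in np.unique(labels):
        class_mean = embeddings[labels == c].mean(axis=0)
        new_centers[c] += alpha * (class_mean - new_centers[c])
    return new_centers
```

In practice a term like this would be added to the supervised classification loss, so the network is penalized both for misclassifying a labeled sample and for embedding it far from its class center.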

This model is optimized in an end-to-end fashion, enabling cohesive learning between the unsupervised and supervised components.
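Read as a single objective, this end-to-end optimization plausibly amounts to a weighted sum of the terms described above; the weights λ₁, λ₂ and the exact formulation are assumptions for illustration, as the summary does not give the paper's equation:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{rec}} \;+\; \lambda_{1}\,\mathcal{L}_{\mathrm{cls}} \;+\; \lambda_{2}\,\mathcal{L}_{\mathrm{center}}
```

Here the reconstruction loss over all labeled and unlabeled samples corresponds to the CST-AE, the classification loss is computed on labeled samples only, and the center loss regularizes the labeled embeddings.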

Evaluation and Performance

To substantiate the efficacy of the proposed SSDA, the researchers conducted experiments on two well-known EEG-based MI datasets, PhysioNet and BCI Competition IV 2a, evaluating on test subjects that the model never saw during training. A key finding is that SSDA outperforms existing state-of-the-art MI classification methods. Notably, it achieves high classification accuracy with relatively few labeled training samples, underscoring its suitability for settings where labeled data is scarce.
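The subject-independent evaluation protocol, training on some subjects and testing on entirely unseen subjects, can be sketched as below. The trial tuple format and subject IDs are hypothetical; the point is that no test subject's data leaks into training:

```python
def subject_wise_split(trials, test_subjects):
    """Split trials so that no held-out subject appears in the training set.

    `trials`: iterable of (subject_id, data, label) tuples (hypothetical format).
    `test_subjects`: set of subject IDs reserved for evaluation only.
    """
    train = [t for t in trials if t[0] not in test_subjects]
    test = [t for t in trials if t[0] in test_subjects]
    return train, test
```

A per-trial (rather than per-subject) random split would mix each subject's recordings across train and test, inflating accuracy estimates for a model that is supposed to generalize to new users.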

Implications and Future Directions

The implications of deploying such a subject-independent model are manifold. Practically, it leads to broader applicability of MI systems across diverse user groups without the need for exhaustive per-subject training. This fosters more user-centric BCI applications, particularly in domains like neurorehabilitation and assistive device control, where the objective is often to use BCI systems in a plug-and-play fashion without extensive user-specific recalibration.

Theoretically, the integration of unsupervised feature extraction with supervised classification in a semi-supervised contextual framework offers a promising direction for future research. Such frameworks can be pivotal in not just motor imagery recognition but also in emotion recognition, cognitive state monitoring, and other EEG-based applications.

Future work could refine this model by incorporating domain adaptation to further address inter-subject variability, or by leveraging generative approaches to augment the labeled dataset. The capacity of deep learning models to learn from limited labeled data while exploiting abundant unlabeled data is a promising avenue toward more nuanced and versatile BCI systems.

In summary, the paper presents a robust and comprehensive solution to longstanding issues in EEG-based MI classification, merging innovative architectural strategies with practical efficacy, thereby paving the way for more generalized and efficient BCI systems.
