Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Transfer Learning and Deep Neural Networks for Robust Intersubject Hand Movement Detection from EEG Signals

Version 1 : Received: 17 August 2024 / Approved: 19 August 2024 / Online: 19 August 2024 (13:16:44 CEST)

How to cite: Kok, C. L.; Ho, C. K.; Aung, T. H.; Koh, Y. Y.; Teo, T. H. Transfer Learning and Deep Neural Networks for Robust Intersubject Hand Movement Detection from EEG Signals. Preprints 2024, 2024081351. https://doi.org/10.20944/preprints202408.1351.v1

Abstract

In this work, five systems were developed to classify four motor functions—forward hand movement (FW), grasp (GP), release (RL), and reverse hand movement (RV)—from EEG signals. The WAY-EEG-GAL dataset was used, in which participants performed a sequence of hand movements. During preprocessing, bandpass filtering was applied to remove artifacts and isolate the mu and beta frequency bands. The first system, a preliminary study model, explored the overall framework of EEG signal processing and classification. It utilized time-domain features such as variance and frequency-domain features such as alpha- and beta-band power, with classification performed by a k-nearest neighbors (KNN) model. Insights from this system informed the development of a baseline system, which used the same dataset but a different feature-extraction and classification paradigm: the common spatial patterns (CSP) method was combined with the continuous wavelet transform (CWT) for feature extraction, and a GoogLeNet classifier was trained via transfer learning. Classification was performed on the six unique pairs of events derived from the four motor functions, using both intrasubject and intersubject methods. In intersubject classification, the baseline system achieved its highest accuracy of 99.73% on the GP-RV pair and its lowest of 80.87% on the FW-GP pair. Building on this, three additional systems were developed to perform 4-way classification. Among these, the final model, ML-CSP-OVR, achieved the highest intersubject classification accuracy: 78.08% using all combined data and 76.39% under leave-one-out intersubject classification. This proposed model, which features a novel combination of CSP-OVR, CWT, and GoogLeNet, demonstrates strong potential as a general, subject-independent system for motor imagery (MI) tasks.
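The preliminary system's preprocessing and feature-extraction stage can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a 500 Hz sampling rate (the EEG rate reported for WAY-EEG-GAL), a zero-phase Butterworth band-pass filter, and the feature set named in the abstract (per-channel variance plus mu/alpha- and beta-band power); filter order and epoch shape are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # assumed EEG sampling rate (Hz) for WAY-EEG-GAL

def bandpass(sig, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter along the last axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig, axis=-1)

def extract_features(epoch, fs=FS):
    """Per-channel variance plus mean power in the mu and beta bands.

    epoch: array of shape (n_channels, n_samples).
    Returns an array of shape (n_channels, 3).
    """
    mu = bandpass(epoch, 8, 12, fs)     # mu/alpha band (8-12 Hz)
    beta = bandpass(epoch, 13, 30, fs)  # beta band (13-30 Hz)
    return np.column_stack([
        epoch.var(axis=-1),          # time-domain variance
        (mu ** 2).mean(axis=-1),     # mu-band power
        (beta ** 2).mean(axis=-1),   # beta-band power
    ])

# Example: a synthetic 32-channel, 2-second epoch
rng = np.random.default_rng(0)
epoch = rng.standard_normal((32, 2 * FS))
X = extract_features(epoch)
print(X.shape)  # (32, 3)
```

The resulting per-channel feature vectors could then be flattened and passed to a KNN classifier (e.g. scikit-learn's `KNeighborsClassifier`), mirroring the preliminary study model's pipeline.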

Keywords

EEG Signal Processing; Motor Imagery; Common Spatial Patterns (CSP); Continuous Wavelet Transform (CWT); GoogLeNet; Transfer Learning; K-Nearest Neighbors (KNN); Intrasubject Classification; Intersubject Classification

Subject

Engineering, Electrical and Electronic Engineering
