Electroencephalography-Based Inner Speech Classification Using LSTM and Wavelet Scattering Transformation (WST)

Abdulghani, Mokhles M. and Walters, Wilbur L. and Abed, Khalid H. (2024) Electroencephalography-Based Inner Speech Classification Using LSTM and Wavelet Scattering Transformation (WST). In: Contemporary Perspective on Science, Technology and Research Vol. 3. B P International, pp. 29-52. ISBN 978-81-969009-2-2

Full text not available from this repository.

Abstract

In this paper, we propose an imagined-speech-based brain-wave pattern recognition approach using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. Imagined speech, sometimes called inner speech, is an excellent candidate for decoding human thought using the Brain-Computer Interface (BCI) concept. BCIs are being developed to progressively allow paralyzed patients to interact directly with their environment. To obtain classifiable EEG data with a smaller number of sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting during training of the deep learning model, we utilized the Wavelet Scattering Transformation (WST). The study was conducted in the Department of Electrical & Computer Engineering and Computer Science at Jackson State University, USA. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) was used to decode the acquired EEG signals into four commands: Up, Down, Left, and Right. The WST was applied to extract the most stable features by passing the EEG dataset through a series of filtration processes; filtration was implemented for each individual command in the EEG datasets. The proposed approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy real-time imagined-speech-based BCI systems. For a fuller evaluation of classification performance, additional metrics were considered: we obtained 92.74% precision, 92.50% recall, and a 92.62% F1-score. Future work is planned to implement and test an online BCI system using MATLAB/Simulink and the g.tec Unicorn Hybrid Black+ headset.
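The filtration idea in the abstract (passing the EEG signal through wavelet filters to obtain stable, lower-dimensional features) can be sketched in miniature. The following is a hypothetical illustration, not the authors' MATLAB implementation: a single first-order scattering-style step, consisting of a band-pass wavelet convolution, a modulus nonlinearity, and non-overlapping low-pass averaging, which compresses a long signal epoch into a few translation-stable coefficients. The Haar-like wavelet, the pooling width, and the function names are assumptions made for this sketch.

```python
# Hypothetical sketch of one first-order wavelet-scattering-style step.
# Not the authors' code; illustrates filter -> modulus -> average only.

def moving_average(x, w):
    """Non-overlapping averages of window width w (low-pass pooling)."""
    return [sum(x[i:i + w]) / w for i in range(0, len(x) - w + 1, w)]

def scatter_features(signal, wavelet=(1.0, -1.0), pool=4):
    """Band-pass filter with a tiny Haar-like wavelet, take the modulus,
    then average-pool, shrinking the epoch into a few stable coefficients."""
    k = len(wavelet)
    # Valid-mode convolution with the wavelet taps.
    band = [sum(signal[i + j] * wavelet[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
    # Modulus nonlinearity, then low-pass averaging.
    return moving_average([abs(v) for v in band], pool)
```

Real implementations (for example, wavelet scattering frameworks in MATLAB's Wavelet Toolbox) cascade several such filter banks with many wavelets per octave and apply them per channel, but the stabilizing pattern is the same: filter, modulus, average.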

Item Type: Book Section
Subjects: Eprints STM archive > Multidisciplinary
Depositing User: Unnamed user with email admin@eprints.stmarchive
Date Deposited: 03 Jan 2024 05:38
Last Modified: 03 Jan 2024 05:38
URI: http://public.paper4promo.com/id/eprint/1747
