EEG-Based Biometric Identification and Emotion Recognition: An Overview

Preprint (not peer-reviewed). Submitted: 14 November 2024. Posted: 18 November 2024.
Abstract
This overview examines recent advancements in EEG-based biometric identification, with a particular focus on the integration of emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as Support Vector Machines (SVM), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.

1. Introduction

Currently, human identification relies on methods such as passwords, access cards, and PINs, which are vulnerable to theft, loss, or forgetfulness. To address these limitations, biometric systems have been developed to identify individuals based on physical characteristics or physiological signals. Commonly used physical characteristics include fingerprints, iris patterns, and facial features, while physiological signals involve data like voice, EEG, EMG, and ECG, among others [1]. Physiological signals, particularly EEG, have garnered significant interest for biometric applications due to their unique characteristics and inherent robustness against impersonation attacks, as they are not visible to the human eye [2].
EEG signals, generated by the brain’s electrical activity, are commonly used in identifying pathologies such as brain tumors, cerebral dysfunctions, and sleep disorders. Their suitability for biometric identification stems from their universality, uniqueness, permanence, and measurable properties [3]. This makes EEG an attractive candidate for secure identification systems, as it can effectively distinguish between individuals [4].
Research has explored various approaches for EEG-based biometric systems. In [5], the authors formulated a binary optimization problem for channel selection and classified autoregressive-coefficient features with a Support Vector Machine with a Radial Basis Function kernel (SVM-RBF), achieving 94.13% accuracy using 23 sensors and 5 autoregressive coefficients. Other studies, such as [6], introduced advanced EEG channel selection methods, obtaining high accuracy with fewer sensors. Deep learning approaches have also been tested, with models such as CNNs, LSTMs, and GRUs achieving accuracies above 96% [7]. Moreover, EEG has been applied in single-channel setups, achieving substantial accuracy through signal segmentation and feature extraction techniques [8].
As outlined in the preceding paragraphs, EEG-based biometric systems have shown promising results in identification tasks. Nonetheless, they remain an area of active research because EEG signals are sensitive to influences such as emotions, health conditions, and other variables, which introduce variability into the reference signals used for system training. This variability can degrade system performance, underscoring the need to address these challenges to improve the robustness and reliability of EEG-based biometric identification; additional research is needed to mitigate signal variability. In parallel, numerous studies have explored the use of EEG signals for emotion recognition, expanding the potential applications of EEG in the biometric field [9,10,11].
In [12], a portable brainwave system was proposed for recognizing positive, negative, and neutral emotions using the DEAP and SEED databases. Among the various methods tested, the Long Short-Term Memory (LSTM) deep learning approach demonstrated the best performance, achieving an impressive accuracy of 94.12% in identifying emotional states. Similarly, [13] presented machine learning models, such as the KNN regressor with Manhattan distance, which utilized features from Alpha, Beta, and Gamma bands, as well as the differential asymmetry of the Alpha band. This approach showed promising results in predicting valence and arousal, achieving an accuracy of 84.4%. These findings underscore the potential of EEG-based models to infer emotional states and deepen the understanding of affective responses. Further studies on EEG-based biometric recognition using deep learning techniques illustrate how convolutional and recurrent neural networks can extract distinctive features from brain signals, achieving high levels of accuracy in biometric identification. These advanced approaches open new pathways for developing more secure and adaptive identification systems that are capable of functioning effectively under challenging conditions [14].
Building on recent advances, research in EEG-based biometric identification and emotion recognition has underscored the importance of multimodal databases and fusion strategies to improve recognition accuracy. A systematic review of studies from 2017 to 2023 identified the DEAP, SEED, DREAMER, and SEED-IV databases as the most widely used, with deep learning models like TNAS, GLFANet, ACTNN, and ECNN-C proving effective in enhancing emotion recognition [15]. Additionally, the MED4 database, integrating EEG signals with photoplethysmography, speech, and facial images, has demonstrated substantial accuracy gains in emotion detection through feature- and decision-level fusion, achieving up to 25.92% improvement over speech and 1.67% over EEG alone in anechoic conditions [16]. Further advancements, such as Binary Particle Swarm Optimization for EEG channel selection, highlight the benefit of focusing on specific brain regions to improve emotion recognition accuracy [17]. The introduction of the M3CV database, which includes multiple subjects, sessions, and tasks, supports the development of robust machine learning algorithms capable of managing intra- and inter-subject variability [18]. These findings highlight the potential of multimodal approaches and the necessity for innovative techniques in EEG-based biometric and emotion recognition, opening promising avenues for future exploration.
Considering this, effective and computationally efficient biometric identification techniques are essential, especially as emotional states can significantly impact the accuracy of these systems. Developing methods that account for emotion-induced variations in EEG signals is therefore crucial to enhancing the robustness and reliability of EEG-based identification. This article is structured as follows: after the introduction, we review the methods and databases used for EEG-based biometric identification and emotion recognition. The following sections discuss relevant machine learning techniques, multimodal approaches, and strategies to address signal variability. We conclude with a discussion of current challenges and future research directions in the field.

2. Literature Review Process

Few studies specifically address biometric identification based on EEG signals in relation to emotional states, aiming to correlate emotions with identification processes. This overview article seeks to provide relevant insights into this research area by synthesizing findings on emotion recognition, biometric identification, and the interrelation between the two. To conduct this review, we utilized the Scopus database, applying targeted search criteria. The keyword field included queries such as “emotion recognition,” “biometric identification,” and “EEG biometric identification.” Article selection was guided by criteria prioritizing innovative methodologies, publicly available databases, studies linking emotions with biometric identification, and experimental articles focused on biometrics and emotions. The collected information was subsequently analyzed and discussed to construct a comprehensive overview of the field.
Figure 1 shows the number of publications in Scopus for the terms 'biometric identification and EEG,' 'emotion recognition and EEG,' and 'biometric identification and emotion recognition and EEG,' revealing interesting patterns in current research on integrating EEG signals into biometric identification and emotion recognition. The results indicate that, taken individually, emotion recognition has been studied considerably more than biometric identification using EEG signals. In addition, biometric identification that accounts for emotional states is still an emerging field. This pattern suggests a growing convergence between the two disciplines, offering significant opportunities for the development of more complex and context-aware systems.
Emotion identification has experienced steady growth, peaking in 2023 with 539 studies, followed by biometric identification in 2021 with 34 studies. The integration of both areas has been less explored compared to individual disciplines, with a growing number of studies over the years, reaching its peak in 2021 with 6 studies.
Figure 1. Number of Publications in Scopus

3. Electroencephalography (EEG): Foundations and Applications

3.1. Brain Anatomy Relevant to EEG

The human brain is the most complex part of the central nervous system (CNS). According to [19], "almost all organs in the human body are potentially transplantable. However, brain transplantation would be equivalent to transplanting the person." The brain enables self-awareness, speech, and movement, and it directs the functions of the body: it stores memories, generates thoughts, regulates body movements, and coordinates speech and balance. The human brain is divided into two connected hemispheres, right and left, each specializing in different functions. They also have an inverse relationship with the body, so the right hemisphere coordinates movement of the left side and vice versa. The right hemisphere dominates memory through images and interpretation and processes emotions, imagery, and taste; it is the non-verbal hemisphere responsible for emotional processing. The left hemisphere dominates symbols, letters, numbers, and words; it is the rational hemisphere [20].
Figure 2. Representation of cerebral hemispheres by [21]

3.2. EEG Signals and Their Properties

Brain waves are the electrical impulses generated by chains of neurons, and these signals are distinguished by their speed and frequency. There are five types of brain waves: alpha, beta, gamma, theta, and delta. Some have low frequencies, while others have higher ones. All five remain active throughout the day, and depending on the activity being performed, some tend to be stronger than others [22].
Delta waves (1 to 3 Hz). Delta waves have the greatest amplitude and are associated with deep sleep. They relate to activities of which we are not aware, such as the heartbeat, and are also observed during states of meditation. The production of the delta rhythm coincides with the regeneration and restoration of the central nervous system [23].
Theta waves (3.5 to 8 Hz). Theta waves are associated with imagination and reflection and also appear during deep meditation. They are of great importance in learning and are produced between wakefulness and sleep when processing unconscious information, such as nightmares or fears [24].
Alpha waves (8 to 13 Hz). Alpha signals appear during states of low brain activity and relaxation. They have greater amplitude than beta waves and generally appear as a reward after a job well done [25].
Beta waves (12 to 33 Hz). Beta waves appear when attention is directed toward external cognitive tasks. They have a fast frequency and are associated with intense mental activity [26].
Gamma waves (25 to 100 Hz). Gamma waves originate in the thalamus and are related to tasks requiring high cognitive processing [23].
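As an illustration, the canonical rhythms above can be isolated with simple bandpass filters. The following Python sketch (assuming NumPy and SciPy are available; the band edges simply mirror the ranges quoted above) decomposes a raw EEG trace into the five bands.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Approximate band edges (Hz) as quoted in the text above.
BANDS = {
    "delta": (1.0, 3.0),
    "theta": (3.5, 8.0),
    "alpha": (8.0, 13.0),
    "beta":  (12.0, 33.0),
    "gamma": (25.0, 100.0),
}

def band_decompose(eeg: np.ndarray, fs: float, order: int = 4) -> dict:
    """Return each canonical rhythm as a bandpass-filtered copy of `eeg`."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        hi = min(hi, 0.99 * fs / 2)          # keep the upper edge below Nyquist
        b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
        out[name] = filtfilt(b, a, eeg)      # zero-phase filtering
    return out

# Example: one second of synthetic 256 Hz "EEG" with a 10 Hz (alpha) component
fs = 256.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
rhythms = band_decompose(x, fs)
print({name: round(float(np.std(sig)), 3) for name, sig in rhythms.items()})
```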
Table 1. Feature Extraction from EEG Signals

Cite | Database | Feature Extraction | Classification Method | Accuracy | Year
[27] | Physikalisch-Technische Bundesanstalt (PTB) database and the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database | Semi-fiducial time-domain method: Pan-Tompkins detection of the R-wave peaks of the QRS complexes, then fixed-width time segments for dimensionality reduction and feature extraction | Newly proposed clustering-based method (not further specified) | 98.6% sensitivity (PTB); 90.6% sensitivity (MIT-BIH) | 2023
[28] | Two SSVEP datasets: the speller dataset and the EPOC dataset | Autoregressive (AR) modeling, power spectral density (PSD), energy of EEG channels, wavelet packet decomposition (WPD), and phase locking values (PLV) | Common spatial patterns combined with specialized deep-learning neural networks | 99% recognition rate | 2023
[29] | Not named in the paper; the cited source suggests BCI2000 | Random Forest-based binary feature selection to filter out uninformative channels and determine the optimum number of channels for the highest accuracy | Hybrid attention-based LSTM-MLP | 99.96% (eyes closed) and 99.70% (eyes open) | 2023
[30] | 'Big Data of 2-classes MI' and Dataset IVa | CSP, ERD/S, AR, and FFT to transform segmented data into informative features (TDP excluded, as it suits motor execution rather than motor imagery) | SVM, GNB | SVM: CSP 98.97%, ERD/S 98.94%, AR 98.93%, FFT 97.92%; GNB: CSP 97.47%, ERD/S 94.58%, FFT 53.80%, AR 50.24% | 2023
[31] | Dataset I: self-collected with an inexpensive EEG device; Dataset II: the widely used PhysioNet BCI dataset | FieldTrip toolbox for Matlab: baseline correction relative to the mean voltage, then a 5-40 Hz finite impulse response (FIR) bandpass filter for noise reduction | Support Vector Machines (SVM), Neural Networks (NN), and Discriminant Analysis (DA) | Identification accuracy of up to 100% with a low-cost EEG device | 2023
Table 2. Feature Extraction from EEG Signals

Cite | Database | Feature Extraction | Classification Method | Accuracy | Year
[32] | Dataset using only two EEG channels and a short 5-second measurement window | - | CNN | 99% identification; 0.187% authentication equal error rate | 2023
[33] | FER2013 (available on the Kaggle repository, https://www.kaggle.com/datasets/msambare/fer2013) and the UTKFace dataset | Preprocessing not described in the article | KNN, SVM, and deep learning techniques such as CNN and VGG-16 with transfer learning | SVM: F1 of 0.83 for age detection and 0.46 for facial emotion recognition; VGG: 95.31% validation accuracy with less computation | 2023
[34] | Data collected from 50 volunteers | 1) spectral information, 2) coherence, 3) mutual correlation coefficient, and 4) mutual information | SVM | Authentication equal error rate (EER) of 0.52%; classification rate of 99.06% | 2023
[35] | WESAD and the MIT-BIH Arrhythmia databases | Two waveform similarity distances, Dynamic Time Warping (DTW) and Time Series Forest (TSF), plus a newly proposed ECG Morphological Feature Extraction (EMFE) method | k-Nearest Neighbors (k-NN), Support Vector Machines (SVM), Random Forest (RF), and Multi-Layer Perceptron (MLP) | ..... | 2023
[36] | DEAP | Phase locking value (PLV) | CNN | 85% | 2023
[37] | SEED and DEAP | Inventive brain optimization algorithm with frequency features to enhance detection accuracy | Optimized deep convolutional neural network (DCNN), compared against K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Deep Belief Network (DBN) | DCNN: 97.12% at 90% training and 96.83% under K-fold analysis | 2023
Table 3. Feature Extraction from EEG Signals

Cite | Database | Feature Extraction | Classification Method | Accuracy | Year
[38] | SEED (the article's code is available at https://github.com/heibaipei/DE-CNN) | Time-frequency content via the modified Stockwell transform; deep features per channel from a deep convolutional neural network; fusion of the reduced channel features into a final feature vector; semi-supervised dimension reduction | CNNs: the Inception-V3 CNN with a support vector machine (SVM) classifier | ... | 2023
[39] | DEAP | 10-fold cross-validation for all experiments and scenarios; sequential floating forward selection (SFFS) to choose the best features | Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel | CCR in the range of 88%-99%, versus an Equal Error Rate (EER) of 15%-35% in the compared research using SVM | 2017
[40] | DEAP, MAHNOB-HCI, and SEED | Time-domain and frequency-domain features | Support Vector Machines (SVM), Random Forest (RF), and k-Nearest Neighbors (k-NN) | ..... | 2021
[41] | Self-collected data from 60 users using Emotiv Epoc+ | Signals filtered with a Savitzky-Golay filter to attenuate short-term variations | Hidden Markov Model (HMM) temporal classifier and Support Vector Machine (SVM) | User identification of 97.50% (HMM) and 93.83% (SVM) | 2017

3.3. Feature Extraction from EEG Signals

There are different feature extraction techniques, including methods in the time domain, frequency domain, time-frequency domain, spatial domain, and non-linear domain. These techniques aim to describe a signal by its characteristics. Some of those used with EEG (electroencephalography) include variance, standard deviation, the correlation coefficient, and Hjorth parameters, all of which are computationally inexpensive. Others include autoregressive (AR) models, the fast Fourier transform (FFT), the short-time Fourier transform (STFT), spectral power, the wavelet transform, the Hilbert-Huang transform, common spatial patterns, and entropy, among others [42].
Methods in the time domain, such as AR, have advantages over techniques like FFT, offering better frequency resolution and improved spectral estimations in short segments of EEG signals. However, they also have limitations. One of these limitations is the lack of clear guidelines for selecting the parameters of spectral estimations. Additionally, AR models require an optimal order, as too low of an order may smooth out the spectrum, while too high of an order may introduce false peaks.
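For illustration, AR coefficients can be estimated from an EEG segment with the Yule-Walker equations. The sketch below is a minimal NumPy implementation, with the model order `p` as the free parameter the paragraph warns about; the returned coefficients are the kind of feature vector used in the AR-based studies cited earlier.

```python
import numpy as np

def ar_coefficients(x: np.ndarray, p: int = 5) -> np.ndarray:
    """Estimate AR(p) coefficients of a 1-D signal via the Yule-Walker equations."""
    x = x - x.mean()
    n = x.size
    # Biased autocorrelation estimates r[0..p]
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    # Solve the Toeplitz system R a = r[1..p]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1 : p + 1])

# Example: AR(5) features from a synthetic 2-second segment at 128 Hz
rng = np.random.default_rng(0)
segment = rng.standard_normal(256)
print(ar_coefficients(segment, p=5))
```

A too-small `p` flattens the implied spectrum and a too-large one adds spurious peaks, which is exactly the order-selection trade-off described above.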
Among the frequency domain models, the FFT has the advantage of allowing mapping from the time domain to the frequency domain, which helps investigate the amplitude distribution of spectra and reflect different brain tasks. However, its limitations include not being suitable for representing non-stationary signals, where the spectral content varies over time.
The Short-Time Fourier Transform (STFT) is simple and easy to implement, but longer segments violate the quasi-stationarity assumption required by the Fourier transform.
The Power Spectral Density (PSD) provides information about the energy distribution of the signal across frequencies. However, it offers limited time-scale information, which matters because EEG signals are non-stationary and nonlinear [43].
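As a concrete example, band powers derived from the PSD are among the most common EEG features. The following sketch assumes SciPy's Welch estimator and reuses the band edges from Section 3.2 (with the gamma cap lowered to stay under the Nyquist frequency of the example).

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 3), "theta": (3.5, 8), "alpha": (8, 13),
         "beta": (12, 33), "gamma": (25, 45)}

def band_powers(x: np.ndarray, fs: float) -> dict:
    """Integrate the Welch PSD estimate over each canonical frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=min(x.size, int(2 * fs)))
    df = f[1] - f[0]
    return {name: float(pxx[(f >= lo) & (f < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

fs = 128.0
x = np.random.randn(int(10 * fs))   # ten seconds of synthetic "EEG"
print(band_powers(x, fs))
```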
Among the time-frequency techniques, we have wavelets, which are particularly effective in dealing with non-stationary signals. They allow the signal to be decomposed in both time and frequency domains, enabling the simultaneous use of long time intervals for low-frequency information and short time intervals for high-frequency information. However, they require a proper choice of the mother wavelet and an appropriate number of decomposition levels for an accurate analysis of EEG signals.
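A wavelet example follows: the sketch assumes the PyWavelets package and a db4 mother wavelet with five decomposition levels, both common but arbitrary choices (exactly the selections the paragraph says must be made with care). It summarizes each level by its relative energy, a typical EEG feature.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed: pip install PyWavelets

def wavelet_energies(x: np.ndarray, wavelet: str = "db4", level: int = 5):
    """Relative energy of each wavelet decomposition level of `x`."""
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA5, cD5, ..., cD1]
    energies = np.array([float(np.sum(c ** 2)) for c in coeffs])
    return energies / energies.sum()

x = np.random.randn(1024)                            # synthetic EEG segment
for name, e in zip(["A5", "D5", "D4", "D3", "D2", "D1"], wavelet_energies(x)):
    print(f"{name}: {e:.3f}")
```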
The Hilbert-Huang Transform (HHT) does not require assumptions about the linearity or stationarity of the signal. It allows adaptive, multi-scale decomposition and does not rely on any predefined basis function. However, it is defined by an iterative algorithm rather than a closed-form mathematical expression, so the final results can be influenced by how the algorithm is implemented and by the definition of its variables and control structures [44].
Among the spatial techniques, Common Spatial Patterns (CSP) can project multi-channel EEG signals into a subspace where differences between classes are emphasized and similarities are minimized. The alternative TDCSP method optimizes CSP filters and effectively reflects changes in the discriminative spatial distribution over time. However, CSP requires not only training samples but also class information to compute the linear transformation matrix, and it needs a large number of electrodes to be effective [42].
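To make the CSP idea concrete, the sketch below is a minimal NumPy/SciPy implementation for the two-class case, under the simplifying assumption of pre-epoched, zero-mean trials. It derives spatial filters from the generalized eigendecomposition of the class covariance matrices, which is the standard formulation rather than any specific cited variant.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Two-class CSP; trials_* have shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # normalized covariances
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Filters from both ends of the spectrum are the most discriminative
    pick = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, pick].T

# Example with synthetic data: 20 trials, 8 channels, 256 samples each
rng = np.random.default_rng(1)
a = rng.standard_normal((20, 8, 256))
b = rng.standard_normal((20, 8, 256))
W = csp_filters(a, b)
features = np.log(np.var(W @ a[0], axis=1))   # log-variance CSP features of one trial
print(W.shape, features)
```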
Among the nonlinear techniques, entropy is robust in analyzing short data segments, resistant to outliers, capable of dealing with noise through appropriate parameter tuning, and applicable to both stochastic and deterministically chaotic signals. It offers various alternatives to characterize signal complexity with changes over time and quantify dynamic changes of events related to EEG signals. However, one of its limitations is the lack of clear guidelines on how to choose the parameters m (embedding dimension of the series) and r (similarity tolerance) before calculating the approximate or sample entropy. These parameters will affect the entropy of each EEG data record during different mental tasks, thus impacting the classification accuracy.
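For reference, a compact sample-entropy routine is sketched below: a direct NumPy rendering of the standard definition, with the embedding dimension m and tolerance r left as the free parameters the paragraph says lack clear selection guidelines.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """Sample entropy of a 1-D signal; r is a fraction of the signal's std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(dim):
        # Embed the signal in `dim` dimensions
        emb = np.array([x[i : i + dim] for i in range(x.size - dim)])
        # Chebyshev distance between all pairs of template vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        n = emb.shape[0]
        return (np.sum(d <= tol) - n) / 2     # matching pairs, self-matches excluded

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

print(sample_entropy(np.random.randn(500)))
```

Raising r or lowering m increases the match counts and lowers the entropy estimate, which is why these parameters directly affect downstream classification accuracy, as noted above.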
Lyapunov exponents leverage the chaotic behavior of an EEG signal for classification tasks and, when combined with other linear or nonlinear features, can lead to improved results. However, finding optimal parameters for calculating the Lyapunov exponents requires significant effort [45].

4. Emotion Recognition and Biometric Identification Using EEG

Electroencephalography (EEG) has gained increasing attention in the fields of emotion recognition and biometric identification due to its capacity to capture unique, brain-based physiological signals. EEG not only allows for the identification of individuals through distinct neural patterns but also provides insight into emotional states, which can influence biometric measurements. This section explores three key applications of EEG: first, its use in biometric identification by analyzing distinct brainwave patterns; second, the use of EEG for emotion recognition, highlighting how neural responses vary with emotional states; and finally, an integrated approach that combines biometric identification and emotion recognition. Together, these applications illustrate the versatility of EEG in creating robust, adaptive systems that leverage both identity and emotional information for enhanced accuracy and security.

4.1. Biometric from EEG Signals

Biometrics is the science of quantifying physiological or behavioral traits to identify individuals through statistical analysis. This capability is inherently present in humans, enabling us to recognize others by features such as voice tone, body shape, and facial characteristics, among others. Biometric authentication verifies whether a person is who they claim to be, while biometric identification determines who a person is by matching them against a set of enrolled individuals. With nearly 8 billion people in the world, each distinguished by a unique identity, recognition methods fall into three main categories: 1) knowledge-based identification, which relies on information known only to the person, such as passwords, PINs, or ID numbers; 2) possession-based identification, which uses unique objects like ID cards, passports, or badges; and 3) biometric identification, based on distinctive physical or behavioral traits, including fingerprints, facial features, and voice patterns [46].
The inclusion of electroencephalography (EEG) signals in biometrics represents a significant advancement in the field, leveraging unique brainwave patterns that are challenging to replicate and thus highly secure for applications demanding rigorous authentication.
Biometric identification has gained significant traction because it is far harder to counterfeit than knowledge-based and possession-based methods, which can be forgotten, duplicated, stolen, or lost. It offers enhanced security, especially in unimodal, bimodal, or multimodal systems, which incorporate one, two, or more physiological characteristics, respectively [47].
The evolution of biometric identification with EEG signals has transformed personal authentication by introducing a method that is inherently linked to brain activity. Unlike traditional biometrics, EEG-based systems offer higher resilience against external tampering, as brainwave patterns are generated internally and are unique to each individual.
Conventional architectures for EEG-based biometric identification have been fundamental to the development of accurate and efficient systems. In EEG biometrics, electrodes are strategically positioned using protocols like the 10-20 system, which ensures consistent and replicable data capture. This arrangement is critical, as it allows precise identification of neural patterns unique to each individual. The most prominent classification architectures are the following. Artificial Neural Networks (ANN) have been widely used in emotion identification from EEG signals; they can learn complex patterns and extract relevant features, providing a solid foundation for biometric identification [48]. Support Vector Machines (SVM) have proven effective in classifying complex patterns, enabling emotion identification from features extracted from EEG signals; their ability to handle high-dimensional data makes them a valuable option [49]. Among deep neural network architectures, Convolutional Neural Networks (CNN) have gained popularity in EEG signal analysis due to their ability to learn spatial and temporal feature hierarchies, making them particularly useful for identifying complex emotion-related patterns [50]. Long Short-Term Memory (LSTM) networks, a variant of recurrent neural networks (RNN), have proven effective at modeling temporal sequences in EEG signals, which is crucial for capturing the temporal dynamics of emotions [51]. Before classification, EEG signals are preprocessed to enhance data quality by removing noise and irrelevant components using methods like Independent Component Analysis (ICA) and digital filters. This preprocessing is crucial, as it reduces interference from non-neural activity and thus improves the precision of biometric identification systems.
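As an illustration of the deep architectures just mentioned, the following PyTorch sketch is a hypothetical minimal model, not any specific network from the cited studies; the channel count, window length, and layer sizes are assumptions. It maps fixed-length multi-channel EEG windows to subject-identity logits.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Minimal 1-D CNN for subject identification from EEG windows.

    Input: (batch, n_channels, n_samples); output: logits over subjects.
    """
    def __init__(self, n_channels=32, n_subjects=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_subjects)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

# Example forward pass: batch of 8 two-second windows at 128 Hz, 32 channels
model = TinyEEGNet(n_channels=32, n_subjects=10)
logits = model(torch.randn(8, 32, 256))
print(logits.shape)   # torch.Size([8, 10])
```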
Following preprocessing, relevant features are extracted from EEG signals, such as specific frequency bands (e.g., Alpha, Beta, Gamma), power spectral density, and entropy measures. These features represent unique neural patterns that can be used to differentiate individuals. Advanced techniques, like the Fast Fourier Transform (FFT) and Wavelet Transform, have been widely implemented for feature extraction due to their ability to isolate frequency-specific characteristics of EEG signals.
Predominant Methodologies

The methodology used for biometric identification from EEG signals typically follows a comprehensive approach encompassing the following steps.

i) Data acquisition and preprocessing. EEG signals are collected using specialized devices, and preprocessing techniques are applied to remove artifacts and normalize the data. The work in [52] implements Independent Component Analysis (ICA) to remove eye-blink artifacts from the EEG signals. In [53], a digital filter is applied to remove artifacts and noise from the measured EEG data, alongside an artifact countermeasure for removing artifacts from the raw recordings and epoching, which divides the EEG data into smaller segments so that specific time intervals can be analyzed [54]. Acquisition protocols also differ: resting state is easy to administer and demands minimal participant effort, but risks low engagement and captures few specific cognitive states; visual stimuli can elicit specific responses and cognitive processes, but participant responses vary and generalizability is limited; cognitive activities capture specific mental states and processes, but may cause participant fatigue and variable task performance.

ii) Feature extraction. Relevant features, such as amplitudes, frequencies, and temporal patterns, are extracted from the EEG signals using advanced signal processing techniques. As noted in [55], many techniques exist, including Eigenvector Methods (EM), variants of the Wavelet Transform such as the Discrete and Continuous Wavelet Transforms, Time-Frequency Distributions (TFD), and the Autoregressive Method (ARM). That study adopts the Fast Fourier Transform (FFT) as its primary feature extraction technique, compares the performance of the alternatives, and reports the superiority of the FFT-based model, with an accuracy of 96.81% in classifying EEG signals.

iii) Model training, validation, and optimization. The selected architecture is trained on labeled datasets to recognize patterns associated with different emotions. The model is then validated and evaluated on independent datasets using metrics such as accuracy, sensitivity, and specificity, and finally the architecture and its parameters are fine-tuned to improve predictive capability and generalization.

This comprehensive methodology has proven effective in accurately identifying emotions from EEG signals, providing promising results in the field of artificial intelligence applied to biometrics. A minimal end-to-end sketch of these steps is given below.
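The sketch strings the three steps together, with synthetic data standing in for a real recording, band-power features, and an SVM; it assumes scikit-learn and SciPy, and all sizes are illustrative assumptions rather than values from any cited study.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
fs, n_subjects, n_windows, n_channels, n_samples = 128, 5, 40, 8, 256

# i) Acquisition stand-in: per-subject windows with a subject-specific offset
X_raw = rng.standard_normal((n_subjects * n_windows, n_channels, n_samples))
y = np.repeat(np.arange(n_subjects), n_windows)
X_raw += 0.3 * rng.standard_normal((n_subjects, 1, 1)).repeat(n_windows, 0)

# ii) Feature extraction: log alpha- and beta-band power per channel
def features(window):
    f, pxx = welch(window, fs=fs, nperseg=128, axis=-1)
    alpha = pxx[:, (f >= 8) & (f < 13)].sum(axis=1)
    beta = pxx[:, (f >= 13) & (f < 30)].sum(axis=1)
    return np.concatenate([np.log(alpha), np.log(beta)])

X = np.array([features(w) for w in X_raw])

# iii) Training and validation: RBF-SVM with 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```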
Table 4 presents a summary of various studies on feature extraction and classification techniques for biometric identification using electroencephalogram (EEG) signals. It outlines the preprocessing methods, datasets used, feature extraction and selection techniques, classification methods, and the accuracy achieved in each study. Commonly employed techniques include multivariate variational mode decomposition (MVMD), Fourier-Bessel series expansion, convolutional neural networks (CNN), and functional connectivity (FC) analysis. Classification methods such as K-nearest neighbors (K-NN), support vector machines (SVM), and deep learning (DL) are frequently used, with reported accuracies ranging from 75.8% to 99.9% depending on the technique and dataset. These studies highlight the increasing effectiveness of feature extraction and classification methods in improving the accuracy of EEG-based biometric identification, which is crucial for applications in security and biometric systems.
Table 4. Feature Extraction and Classification Techniques for Biometric Identification

Cite | Preprocessing | Database | Extraction and Selection | Biometric Classification | Accuracy
(Kamaraju et al., 2023) | Multivariate variational mode decomposition (MVMD) | Own dataset (35 subjects) | Fourier-Bessel series expansion-based (FBSE) entropies | K-NN | 93.4 ± 7.0%
(Ortega-Rodríguez et al., 2023) | FieldTrip, 4-40 Hz bandpass, beta frequency band 13-30 Hz | Own dataset (13 subjects) and PhysioNet BCI (109 subjects) | PCA, Wilcoxon test, fast Fourier transform, Power Spectrum (PS), asymmetry index | RBF-SVM, K-fold cross-validation | 99.9 ± 1.39%
(Benomar et al., 2022) | PREP pipeline, notch filter, StandardScaler, 1 Hz high-pass filter, 50 Hz low-pass filter | BED (Biometric EEG Dataset), 21 subjects | PCA, Wilcoxon test, optimal spatial filtering | Deep learning (DL) | 86.74%
(Tian et al., 2023) | - | https://link.springer.com/chapter/10.1007/978-981-99-0479-2_294 | Functional connectivity (FC) | Multi-stream GCN (MSGCN) | 98.05%
(Kralikova et al., 2022) | Notch filter, bandpass filter, common average reference (CAR) | Own dataset (21 subjects) | 1D-CNN | 5-fold cross-validation, LDA, SVM, K-NN, DL | 99%
(Wibawa et al., 2022) | Finite Impulse Response (FIR), Automatic Artifact Removal EOG (AAR-EOG), Artifact Subspace Reconstruction (ASR), and Independent Component Analysis (ICA) | Own dataset (43 subjects) | Power Spectral Density (PSD) | Naive Bayes, Neural Network, SVM | 97.7%
(Hendrawan et al., 2022) | ICA, Butterworth filter | Own dataset (8 subjects) | Power Spectral Density (PSD) from delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-30 Hz), and gamma (30-50 Hz) bands; LDA | K-NN, SVM | 80%
(Lai et al., 2022) | - | PhysioBank database (109 subjects) | CNN | CNN-ECOC-SVM | 98.49%
(Jijomon & Vinod, 2018) | Matlab edfread, 7.5-second window | PhysioNet database (109 subjects) | Power spectrum, PSD, Mean Correlation Coefficient (MCC) | Method proposed by the authors | Error rate of 0.016
(Waili et al., 2019) | 2nd-order Butterworth filter | Own dataset (6 subjects) | Daubechies (db8) wavelet, PSD | Multilayer Perceptron Neural Network (MLPNN) | 75.8%
(Jijomon Chettuthara Monsy, 2020) | Matlab edfread | PhysioNet database (16 subjects) | Frequency-weighted power (FWP) | Method proposed by the authors | EER of 0.0039

4.2. Emotion Recognition from EEG Signals

Emotions are responses to events that are accompanied by physiological changes, which predispose us to act. One characteristic of emotions is their high intensity over a short period of time, which distinguishes them from feelings.
Emotion recognition is an active field of study because emotions are often difficult to identify: people develop ways to conceal them, which is a serious problem since certain emotional states may be associated with illness. Automatic emotion identification has therefore been of great assistance, as its goal is to recognize and identify emotions accurately.
Some classifications mention four primary emotions, but according to authors such as Matsumoto and Ekman (2009) and Damasio (2000) there are six. A preliminary study on emotions supported this view, identifying the six primary emotions as sadness, surprise, anger, fear, happiness, and contempt [56]. Secondary emotions are combinations of primary emotions. Emotions can also be represented graphically using the dimensions of valence, arousal, and dominance.
Valence is the primary dimension on which the emotional experience is built. It represents the motivational component of emotion (pleasantness vs. unpleasantness) and originates in separate primary neurobiological structures: one activates the appetitive motivational system, the other the defensive motivational system (LeDoux, 2000). This primacy of valence and the existence of separate structures have been observed not only in humans but also in primates and other mammals through functional magnetic resonance imaging (fMRI) (Bradley, 2009; Dolin, Zborovskaya, & Zamakhovev, 1965; Lang & Bradley, 2010).
Arousal is the dimension that reflects the energy expended during the emotion; it represents the amount of sympathetic activation experienced during the emotional experience. Research has shown that arousal is often dependent on valence, as the activation of either the appetitive or the defensive motivational systems is accompanied by an increase in arousal (Bradley, 2009; Bradley, Codispoti, Cuthbert, & Lang, 2001).
Dominance is the most recent dimension, referring to the degree of control that a person perceives over their emotional response. Its function is to interrupt or continue the behavioral response. This dimension originates in more recent brain structures and is responsible for inhibition, delay, context evaluation, and planning (Vila et al., 2001) [57].
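These dimensions are often discretized in practice. The toy function below illustrates an assumed quadrant mapping over self-assessment ratings on a 1-9 scale (the scale used by datasets such as DEAP); it is a didactic sketch, not a method from the cited works.

```python
def quadrant_label(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    """Map a (valence, arousal) rating pair on a 1-9 scale to a coarse label."""
    if valence >= midpoint:
        return "happy/excited" if arousal >= midpoint else "calm/content"
    return "angry/afraid" if arousal >= midpoint else "sad/bored"

# One example rating pair per quadrant of the valence-arousal plane
for v, a in [(7.5, 8.0), (7.5, 2.0), (2.0, 8.0), (2.0, 2.0)]:
    print((v, a), "->", quadrant_label(v, a))
```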
The interaction between emotions and EEG signals is critical in biometric identification, as emotional states can cause variations in EEG signal patterns, which impacts the stability and reliability of biometric systems based on EEG data. These variations can introduce noise and alter key characteristics in the EEG signals, posing a challenge for achieving consistent accuracy.
The identification of emotions through physiological signals, particularly EEG, is a rapidly advancing field that relies heavily on the use of specialized datasets. These datasets provide the necessary data for training and evaluating machine learning models designed to recognize emotional states from physiological responses. Various databases have been developed to capture the complexity of emotional expression across different modalities, including EEG signals, facial expressions, and peripheral physiological data. Notable datasets include the DEAP database, which contains EEG and peripheral signals from 32 participants responding to music videos, and the MAHNOB database, which includes EEG recordings and video data from 27 participants exposed to emotional stimuli. The SEED database offers EEG data from 15 movie clips shown to 15 participants, while the LUMED-2 dataset combines EEG, facial expressions, and peripheral data from 13 participants responding to audiovisual stimuli. Each of these datasets provides a unique combination of emotional stimuli, physiological data, and demographic characteristics, making them valuable resources for the development of emotion recognition systems.
DEAP. This database contains EEG signals and peripheral physiological signals from 32 participants. Each participant watched 40 one-minute excerpts of music videos rated for arousal, valence, liking, dominance, and familiarity. Half of the participants were female, ages ranged from 19 to 37, and the mean age was 26.9. Peripheral recordings included EOG, 4 EMG signals (from the zygomaticus major and trapezius muscles), GSR, blood volume pulse (BVP), temperature, and respiration; frontal-face video was also recorded for 22 participants. The signals were recorded from 32 channels at a sampling rate of 512 Hz [58].
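For orientation, the preprocessed Python release of DEAP ships one pickled file per participant containing a 'data' array (40 trials × 40 channels × 8064 samples, downsampled to 128 Hz) and a 'labels' array (40 trials × 4 ratings: valence, arousal, dominance, liking). The sketch below, with a hypothetical file path, loads one participant and slices out the 32 EEG channels.

```python
import pickle
import numpy as np

# Hypothetical path to one participant of DEAP's preprocessed Python release
path = "data_preprocessed_python/s01.dat"

with open(path, "rb") as f:
    subject = pickle.load(f, encoding="latin1")  # dict with 'data' and 'labels'

data = subject["data"]      # shape (40, 40, 8064): trials x channels x samples
labels = subject["labels"]  # shape (40, 4): valence, arousal, dominance, liking

eeg = data[:, :32, :]                     # the first 32 channels are EEG
valence, arousal = labels[:, 0], labels[:, 1]
high_valence = valence >= 5.0             # binary target on the 1-9 rating scale
print(eeg.shape, np.mean(high_valence))
```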
MAHNOB. The database includes 27 participants, 11 males and 16 females. EEG was recorded from 32 channels at a sampling rate of 256 Hz; videos of the participants' faces and bodies were recorded with 6 cameras at 60 frames per second (fps), eye gaze was recorded at 60 Hz, and audio at a sampling rate of 44.1 kHz. During the recording, 20 videos were presented to each participant, and for each video emotional keywords, arousal, valence, dominance, and predictability were rated on a scale from 1 to 9 [59].
SEED. This database consists of EEG signals from 7 men and 8 women with a mean age of 23.27 years (standard deviation 2.37). Fifteen clips from Chinese movies were selected as stimulus material to elicit positive, negative, and neutral emotions, and each experiment consists of 15 trials. The EEG signals were captured with a 62-channel cap following the international 10-20 system, then preprocessed and downsampled to 200 Hz. The label of each signal is -1 (negative), 0 (neutral), or +1 (positive) [60].
LUMED-2. The Loughborough University Multimodal Emotion Database-2 (LUMED-2) is a multimodal emotion dataset containing simultaneous recordings from 13 participants (6 women and 7 men) presented with audiovisual stimuli. The stimuli total 8 minutes and 50 seconds and consist of short video clips selected from the web to provoke specific emotions. After each session, participants labeled the clips with the emotional states they experienced while watching them; the labeling yielded three emotions: "sad," "neutral," and "happy." Facial expressions were captured using a webcam at a resolution of 640 × 480 and 30 frames per second (fps). EEG data were captured using ENOBIO, an 8-channel wireless EEG device, at a temporal resolution of 500 Hz; the data were filtered to the [0, 75] Hz range, and baseline subtraction was applied to each window. Peripheral physiological data, namely the participants' GSR (Galvanic Skin Response), were recorded with a Bluetooth-powered EMPATICA E4 wristband [61].
The datasets described previously provide a solid foundation for research in emotion identification, offering a wide variety of physiological signals and emotional stimuli to train and evaluate classification models. These diverse datasets are essential for advancing emotion recognition systems, as they represent real-world scenarios and a range of emotional expressions. Building on this, the studies summarized in Table 5 further contribute to the field by showcasing different feature extraction and classification techniques applied to EEG signals for emotion identification. By utilizing these datasets, researchers employ various preprocessing methods, feature extraction techniques, and classification models to enhance the accuracy and robustness of emotion recognition systems. These efforts highlight the critical role that both the choice of dataset and the methodological approaches play in the development of reliable emotion detection systems using EEG signals.
Table 5 presents a collection of studies focused on feature extraction and classification techniques for emotion identification using EEG signals. It provides an overview of the preprocessing methods, feature extraction and selection techniques, and emotion classification methods employed in each study. The preprocessing steps include a variety of filtering techniques such as Laplacian surface filtering, Butterworth filters, and blind source separation, as well as more advanced methods like EEGLAB and Artifact Subspace Reconstruction (ASR). For feature extraction, approaches such as the wavelet transform, Principal Component Analysis (PCA), Higher Order Spectral Analysis (HOSA), and entropy-based methods are frequently used. The classification methods vary, with techniques like Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), K-Nearest Neighbor (K-NN), Quadratic Discriminant Analysis (QDA), and Fuzzy Cognitive Maps (FCM) commonly applied. The reported methods show promising accuracy in identifying emotional states from EEG signals, highlighting the effectiveness of various combinations of preprocessing, feature extraction, and classification for emotion recognition.

4.3. Emotion-Aware Biometric Identification

The electroencephalogram (EEG) is a signal that captures information about brain activity. EEG signals are non-stationary, meaning they change over time and are influenced by factors such as human emotions, thoughts, and activities [62]. Emotion-aware biometric identification is an emerging and relatively unexplored area that seeks to enhance biometric identification systems by incorporating emotional states; for example, in [63], ECG signals were applied to both biometric identification and emotion identification. While traditional biometric systems focus solely on stable physiological signals, recent studies have investigated how emotions influence these signals, particularly EEG data, to improve the robustness of identification methods.
Some approaches in this field have utilized datasets that include conditions such as driving fatigue and various emotional states, as well as artificially induced brain responses like rapid serial visual presentation (RSVP). However, while these datasets involve conditions of fatigue and emotions, these factors are not specifically analyzed for their impact on biometric identification. For example, [64] presents a convolutional neural network model, GSLT-CNN, which directly processes raw EEG data without requiring feature engineering. This model was evaluated on a dataset of 157 subjects across four experiments. In contrast, [65] focuses on using olfactory stimuli, such as specific aromas, to evoke emotional responses that affect brainwave patterns, thereby uncovering unique aspects of individual identity.
Other research efforts have leveraged emotion-specific datasets, designed to capture variations in emotional states, for biometric identification purposes. These datasets enable researchers to identify distinct neural responses associated with emotions, providing valuable insights into how emotional context can be integrated into biometric systems [66,67]. Although still in its early stages, emotion-aware biometric identification has the potential to create adaptive, resilient systems that account for the dynamic nature of human emotions.
This area of research has been somewhat controversial. Zhang et al. [68] investigated the use of emotional EEG for identification purposes and found that emotion did not affect identification accuracy when using 12-second EEG segments. However, as mentioned by Wang et al. [66], the robustness of this method across different emotional states was not verified.
That study [66] opted to use the SEED dataset, a publicly available emotional EEG dataset, to reduce the influence of varying content on brain activity by having subjects watch extended video clips, on the premise that an individual's underlying characteristics and rhythms can only be observed effectively during sustained video viewing.
Table 5. Feature Extraction and Classification Techniques for Emotion Identification

Authors | Year | Preprocessing | Extraction and Selection | Emotion Classification
Murugappan, Nagarajan Ramachandran | 2010 | Laplacian surface filter | Wavelet transform, Fuzzy C-Means (FCM) and Fuzzy K-Means (FKM) | Linear Discriminant Analysis (LDA) and K-Nearest Neighbor (K-NN)
You-Yun Lee, Shulan Hsieh | 2014 | FFT, EEGLAB | Correlation, coherence, and phase synchronization | Quadratic discriminant analysis
Daniela Iacoviello, Andrea Petracca | 2015 | Wavelet filter | PCA | SVM
Nitin Kumar, Kaushikee Khaund | 2015 | Blind source separation, 4.0-45.0 Hz bandpass filter | Higher Order Spectral Analysis (HOSA) | LS-SVM, Artificial Neural Networks (ANN)
Nitin Kumar, Kaushikee Khaund | 2016 | Butterworth filter | Bispectral analysis with HOSA | SVM
G. Mejía, A. Gómez | 2016 | Butterworth filters | Stationary wavelet transform | Quadratic discriminant analysis (QDA)
Yong Zhang, Xiaomin Ji | 2016 | ICA-based algorithm | Sample entropy, quadratic entropy, entropy distribution | SVM
Beatriz García | 2016 | ICA-based algorithm | Sample entropy, quadratic entropy, entropy distribution | SVM
Yasar Dasdemir, Esen Yildirim | 2017 | EEGLAB, MARA, AAR | Phase locking value (PLV), with ANOVA to measure significance | SVM
Moon Inder Singh, Mandeep Singh | 2017 | Laplacian surface filter | Wavelet transform | Polynomial SVM
Bahareh Nakisa, Mohammad Naim Rastgoo | 2018 | Butterworth and notch filters | ACA, SA, GA, and PSO algorithms | SVM
Jia Wen Li, Xiangyu Zeng, Huiming Zhao | 2023 | DWT, EMD | Smoothed pseudo-Wigner-Ville distribution (RSPWVD) | K-NN, SVM, LDA, and LR
Georgia Sovatzidi, Dimitris K. Iakovidis | 2023 | Finite Impulse Response, Artifact Subspace Reconstruction (ASR) | Power Spectral Density (PSD) | Naïve Bayes (NB), K-NN, SVM, Fuzzy Cognitive Map (FCM)

5. Conclusions and Future Work

EEG-based biometric identification and emotion recognition face multiple challenges that impact their effectiveness and practical application. For biometric identification, a primary difficulty lies in the high variability of EEG features across sessions for the same individual, which complicates consistent identification [69]. Additional challenges include limitations related to the number of channels and temporal windows used, as well as the risk of overfitting in deep learning models [32]. The complex data collection and computational requirements inherent in multi-channel EEG setups further complicate the deployment of these systems [70]. Moreover, identifying robust features from non-stationary EEG signals that are sufficiently discriminatory remains a significant hurdle [71], along with fundamental concerns around privacy, user-friendliness, and authentication standards [72].

Similarly, EEG-based emotion recognition encounters unique obstacles, particularly due to the variability of EEG signals between individuals, which challenges model generalization across unseen subjects [73]. Issues in data processing, generalizability, and the integration of these models into human-computer interaction frameworks present further difficulties [74]. The neural complexity of emotions and individual differences add another layer of complexity to emotion recognition models [75]. Furthermore, the use of single features, redundant signals, and the high number of channels required for effective recognition limit the accuracy and portability of these systems [76,77]. Feature redundancy and computational demands complicate implementation in wearable devices, underscoring the need for efficient, channel-optimized solutions [78].

In this context, emotion-aware biometric identification emerges as an innovative yet challenging approach, aiming to integrate emotional states into biometric systems to enhance robustness and adaptability. By incorporating emotion recognition, these systems could potentially achieve more accurate and personalized identification, especially in dynamic environments. However, achieving reliable emotion-aware biometric identification requires addressing additional challenges, such as emotional variability across sessions and individual differences in emotional expression. Future research should focus on optimizing feature selection methods to manage both the non-stationary nature of EEG signals and the influence of transient emotional states. Developing lightweight, high-performance models capable of integrating biometric and emotional data could open new avenues for secure, adaptive authentication systems, particularly in applications where user engagement and real-time adaptability are critical.

Author Contributions

C.D. and A.C.: conceptualization, methodology, validation, formal analysis, investigation, resources, writing—original draft preparation, writing—review and editing; L.S. and E.D.: conceptualization, methodology, investigation, supervision; M.B.: methodology, investigation, resources, writing—review and editing, supervision, project administration, funding acquisition

Funding

This work was supported by Institución Universitaria Pascual Bravo.

Acknowledgments

The authors would like to thank the contributions of the research project titled "Framework de fusión de la información orientado a la protección de fraudes usando estrategias adaptativas de autenticación y validación de identidad," supported by Institución Universitaria Pascual Bravo.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zapata, J.C.; Duque, C.M.; Gonzalez, M.E.; Becerra, M.A. Data Fusion Applied to Biometric Identification—A Review. Advances in Computing and Data Sciences 2017, 721, 721–733. [Google Scholar] [CrossRef]
  2. Zhong, W.; An, X.; Di, Y.; Zhang, L.; Ming, D. Review on identity feature extraction methods based on electroencephalogram signals. 2021, 38. [Google Scholar] [CrossRef]
  3. Belhadj, F. Biometric system for identification and authentication. 2017. [Google Scholar]
  4. Moreno-Revelo, M.; Ortega-Adarme, M.; Peluffo-Ordoñez, D.; Alvarez-Uribe, K.; Becerra, M. Comparison among physiological signals for biometric identification. 2017, 10585 LNCS. [Google Scholar] [CrossRef]
  5. Alyasseri, Z.A.A.; Alomari, O.A.; Makhadmeh, S.N.; Mirjalili, S.; Al-Betar, M.A.; Abdullah, S.; Ali, N.S.; Papa, J.P.; Rodrigues, D.; Abasi, A.K. EEG Channel Selection for Person Identification Using Binary Grey Wolf Optimizer. IEEE Access 2022, 10, 10500–10513. [Google Scholar] [CrossRef]
  6. Abdi Alkareem Alyasseri, Z.; Alomari, O.A.; Al-Betar, M.A.; Awadallah, M.A.; Hameed Abdulkareem, K.; Abed Mohammed, M.; Kadry, S.; Rajinikanth, V.; Rho, S. EEG Channel Selection Using Multiobjective Cuckoo Search for Person Identification as Protection System in Healthcare Applications. Computational Intelligence and Neuroscience 2022, 2022, 1–18. [Google Scholar] [CrossRef]
  7. Radwan, S.H.; El-Telbany, M.; Arafa, W.; Ali, R.A. Deep Learning Approaches for Personal Identification Based on EGG Signals. Lecture Notes on Data Engineering and Communications Technologies 2022, 100, 30–39. [Google Scholar] [CrossRef]
  8. Hendrawan, M.A.; Saputra, P.Y.; Rahmad, C. Identification of optimum segment in single channel EEG biometric system. Indonesian Journal of Electrical Engineering and Computer Science 2021, 23, 1847–1854. [Google Scholar] [CrossRef]
  9. Kulkarni, D.; Dixit, V.V. Hybrid classification model for emotion detection using electroencephalogram signal with improved feature set. Biomedical Signal Processing and Control 2025, 100, 106893. [Google Scholar] [CrossRef]
  10. Wang, C.; Li, Y.; Liu, S.; Yang, S. TVRP-based constructing complex network for EEG emotional feature analysis and recognition. Biomedical Signal Processing and Control 2024, 96, 106606. [Google Scholar] [CrossRef]
  11. Kumar, A.; Kumar, A. DEEPHER: Human Emotion Recognition Using an EEG-Based DEEP Learning Network Model. Engineering Proceedings 2021, 10, 32. [Google Scholar] [CrossRef]
  12. Sakalle, A.; Tomar, P.; Bhardwaj, H.; Acharya, D.; Bhardwaj, A. A LSTM based deep learning network for recognizing emotions using wireless brainwave driven system. Expert Systems with Applications 2021, 173, 114516. [Google Scholar] [CrossRef]
  13. Galvão, F.; Alarcão, S.M.; Fonseca, M.J. Predicting exact valence and arousal values from EEG. Sensors 2021, 21, 3414. [Google Scholar] [CrossRef]
  14. Maiorana, E. Deep learning for EEG-based biometric recognition. Neurocomputing 2020, 410, 374–386. [Google Scholar] [CrossRef]
  15. Prabowo, D.W.; Nugroho, H.A.; Setiawan, N.A.; Debayle, J. A systematic literature review of emotion recognition using EEG signals. Cognitive Systems Research 2023, 82, 101152. [Google Scholar] [CrossRef]
  16. Wang, Q.; Wang, M.; Yang, Y.; Zhang, X. Multi-modal emotion recognition using EEG and speech signals. Computers in Biology and Medicine 2022, 149, 105907. [Google Scholar] [CrossRef]
  17. Kouka, N.; Fourati, R.; Fdhila, R.; Siarry, P.; Alimi, A.M. EEG channel selection-based binary particle swarm optimization with recurrent convolutional autoencoder for emotion recognition. Biomedical Signal Processing and Control 2023, 84, 104783. [Google Scholar] [CrossRef]
  18. Huang, G.; Hu, Z.; Chen, W.; Zhang, S.; Liang, Z.; Li, L.; Zhang, L.; Zhang, Z. M3CV: A multi-subject, multi-session, and multi-task database for EEG-based biometrics challenge. NeuroImage 2022, 264, 119666. [Google Scholar] [CrossRef]
  19. Diamond, M.C.; Scheibel, A.B.; Elson, L.M. libro de trabajo el Cerebro Humano. 2014. [Google Scholar]
  20. Romeo Urrea, H. El Dominio de los Hemisferios Cerebrales. Ciencia Unemi 2015, 3, 8–15. [Google Scholar] [CrossRef]
  21. Gamma, E. Left Brain Vs. Right Brain: Hemisphere Function, 2023.
  22. Sciotto, E.A.; Niripil, E.B. Salud En La Educación Ondas Cerebrales, Conciencia Y Cognición. Salud en la educación 2018. [Google Scholar]
  23. García Domínguez, A.E. Análisis de ondas cerebrales para determinar emociones a partir de estímulos visuales. 2015, 137. [Google Scholar]
  24. Changoluisa Romero, D.P.; Escalante Viteri, F.J. Diseño e implementación de un sistema de adquisición de ondas cerebrales (EEG) de seis canales y análisis en tiempo, frecuencia y coherencia. 2012. [Google Scholar]
  25. Caballero, P.A.O. Diseño de mecanismos de procesamiento interactivos para el análisis de ondas cerebrales. 2005. [Google Scholar]
  26. Lundqvist, M.; Herman, P.; Warden, M.R.; Brincat, S.L.; Miller, E.K. Gamma and beta bursts during working memory readout suggest roles in its volitional control. Nature Communications 2018, 9, 1–12. [Google Scholar] [CrossRef]
  27. Meltzer, D.; Luengo, D. Efficient Clustering-Based electrocardiographic biometric identification. Expert Systems with Applications 2023, 219, 119609. [Google Scholar] [CrossRef]
  28. Oikonomou, V.P. Human Recognition Using Deep Neural Networks and Spatial Patterns of SSVEP Signals. 2023, 23, 2425. [Google Scholar] [CrossRef]
  29. Balcı, F. DM-EEGID: EEG-Based Biometric Authentication System Using Hybrid Attention-Based LSTM and MLP Algorithm. Traitement Du Signal 2023, 40, 1–14. [Google Scholar] [CrossRef]
  30. Bak, S.J.; Jeong, J. User Biometric Identification Methodology via EEG-Based Motor Imagery Signals. IEEE Access 2023, XX. [Google Scholar] [CrossRef]
  31. Ortega-Rodríguez, J.; Martín-Chinea, K.; Gómez-González, J.F.; Pereda, E. Brainprint based on functional connectivity and asymmetry indices of brain regions: A case study of biometric person identification with non-expensive electroencephalogram headsets. 2023. [Google Scholar] [CrossRef]
  32. Alsumari, W.; Hussain, M.; Alshehri, L.; Aboalsamh, H.A. EEG-Based Person Identification and Authentication Using Deep Convolutional Neural Network. Axioms 2023, 12, 74. [Google Scholar] [CrossRef]
  33. Teja Chavali, S.; Tej Kandavalli, C.; Sugash, T.M.; Subramani, R. Smart Facial Emotion Recognition With Gender and Age Factor Estimation. Procedia Computer Science 2023, 218, 113–123. [Google Scholar] [CrossRef]
  34. TajDini, M.; Sokolov, V.; Kuzminykh, I.; Ghita, B. Brainwave-based authentication using features fusion. Computers and Security 2023, 129, 103198. [Google Scholar] [CrossRef]
  35. Bıçakcı, H.S.; Santopietro, M.; Guest, R. Activity-based electrocardiogram biometric verification using wearable devices. IET Biometrics 2023, 12, 38–51. [Google Scholar] [CrossRef]
  36. Cui, G.; Li, X.; Touyama, H. Emotion recognition based on group phase locking value using convolutional neural network. Scientific Reports 2023, 13, 1–9. [Google Scholar] [CrossRef]
  37. Khubani, J.; Kulkarni, S. Inventive deep convolutional neural network classifier for emotion identification in accordance with EEG signals. Social Network Analysis and Mining 2023, 13. [Google Scholar] [CrossRef]
  38. Zali-Vargahan, B.; Charmin, A.; Kalbkhani, H.; Barghandan, S. Deep time-frequency features and semi-supervised dimension reduction for subject-independent emotion recognition from multi-channel EEG signals. Biomedical Signal Processing and Control 2023, 85, 104806. [Google Scholar] [CrossRef]
  39. Vahid, A.; Arbabi, E. Human identification with EEG signals in different emotional states. In Proceedings of the 2016 23rd Iranian Conference on Biomedical Engineering and 2016 1st International Iranian Conference on Biomedical Engineering, ICBME 2016; 2017; pp. 242–246. [Google Scholar] [CrossRef]
  40. Arnau-González, P.; Arevalillo-Herráez, M.; Katsigiannis, S.; Ramzan, N. On the Influence of Affect in EEG-Based Subject Identification. IEEE Transactions on Affective Computing 2021, 12, 391–401. [Google Scholar] [CrossRef]
  41. Kaur, B.; Singh, D.; Roy, P.P. A Novel framework of EEG-based user identification by analyzing music-listening behavior. Multimedia Tools and Applications 2017, 76, 25581–25602. [Google Scholar] [CrossRef]
  42. Medina, B.; Sierra, J.E.; Ulloa, A.B. Técnicas de extracción de características de señales EEG en la imaginación de movimiento para sistemas BCI. Extraction techniques of EEG signals characteristics in motion imagination for BCI systems. Espacios 2018, 39, 36–48. [Google Scholar]
  43. Manuel. Análisis en el dominio de la frecuencia - Análisis de Fourier 2005. p. 27.
  44. Colominas, M.A. Métodos guiados por los datos para el análisis de señales: contribuciones a la descomposición empírica en modos. 2016. [Google Scholar]
  45. Lara, L.; Stoico, C.; Machado, R.; Castagnino, M. Estimación de los exponentes de lyapunov. 2003, XXII, 1441–1451. [Google Scholar]
  46. Jain, A.; Bolle, R.; Pankanti, S. Introduction to biometrics. International Conference on Pattern Recognition 2008, 1. [Google Scholar] [CrossRef]
  47. Meltzer-Camino, D.; Alarcon-Aquino, V. Recent advances in biometrics and its standardization: a survey. 2018. [Google Scholar]
  48. Rodriguez-Wallberg, K. ECG Biometric Recognition by Convolutional Neural Networks with Transfer Learning Using Random Forest Approach. Smart Innovation, Systems and Technologies (Smart Innovation, Systems and Technologies) 2022, 177–189. [Google Scholar] [CrossRef]
  49. Ibrahim, H. EEG-Based Biometric Close-Set Identification Using CNN-ECOC-SVM. 2021, 723–732. [Google Scholar] [CrossRef]
  50. Lai, C.Q.; Ibrahim, H.; Suandi, S.A.; Abdullah, M.Z. Convolutional Neural Network for Closed-Set Identification from Resting State Electroencephalography. 2022, 10, 3442. [Google Scholar] [CrossRef]
  51. Sun, Y.; Lo, F.P.W.; Lo, B. EEG-based user identification system using 1D-convolutional long short-term memory neural networks. Expert Systems with Applications 2019, 125, 259–267. [Google Scholar] [CrossRef]
  52. Kaliraman, B.; Nain, S.; Verma, R.; Thakran, M.; Dhankhar, Y.; Hari, P.B. Pre-processing of EEG signal using Independent Component Analysis. 2022, 1–5. [Google Scholar] [CrossRef]
  53. Yamashita, M.; Nakazawa, M.; Nishikawa, Y.; Abe, N. Examination and Its Evaluation of Preprocessing Method for Individual Identification in EEG. 2020, 239–246. [Google Scholar] [CrossRef]
  54. Bhawna, K.; Priyanka; Duhan, M. Electroencephalogram Based Biometric System: A Review. Lecture Notes in Electrical Engineering 2021, 668, 57–77. [Google Scholar] [CrossRef]
  55. Acharya, D.; Lende, M.; Lathia, K.; Shirgurkar, S.; Kumar, N.; Madrecha, S.; Bhardwaj, A. Comparative Analysis of Feature Extraction Technique on EEG-Based Dataset. 2020, 405–416. [Google Scholar] [CrossRef]
  56. Carla, F.; Yanina, W.; Daniel Gustavo, P. ¿Cuántas Son Las Emociones Básicas? Anuario de Investigaciones 2017, 26, 253–257. [Google Scholar]
  57. Gantiva, C.; Camacho, K. Características de la respuesta emocional generada por las palabras: un estudio experimental desde la emoción y la motivación. 2016, 10, 55–62. [Google Scholar]
  58. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Transactions on Affective Computing 2012, 3, 18–31. [Google Scholar] [CrossRef]
  59. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Transactions on Affective Computing 2012, 3, 42–55. [Google Scholar] [CrossRef]
  60. Zheng, W.L.; Guo, H.T.; Lu, B.L. Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER); IEEE, 2015; pp. 154–157. [Google Scholar] [CrossRef]
  61. Ekmekcioglu, E.; Cimtay, Y. Loughborough University Multimodal Emotion Dataset-2. 2020. [Google Scholar]
  62. Ong, Z.Y.; Saidatul, A.; Vijean, V.; Ibrahim, Z. Non Linear Features Analysis between Imaginary and Non-imaginary Tasks for Human EEG-based Biometric Identification. IOP Conference Series: Materials Science and Engineering 2019, 557, 012033. [Google Scholar] [CrossRef]
  63. Brás, S.; Ferreira, J.H.T.; Soares, S.C.; Pinho, A.J. Biometric and Emotion Identification: An ECG Compression Based Method. Frontiers in Psychology 2018, 9. [Google Scholar] [CrossRef]
  64. Chen, J.X.; Mao, Z.J.; Yao, W.X.; Huang, Y.F. EEG-based biometric identification with convolutional neural network. Multimedia Tools and Applications 2020, 79, 10655–10675. [Google Scholar] [CrossRef]
  65. Pandharipande, M.; Chakraborty, R.; Kopparapu, S.K. Modeling of Olfactory Brainwaves for Odour Independent Biometric Identification. In Proceedings of the 2023 31st European Signal Processing Conference (EUSIPCO); IEEE, 2023; pp. 1140–1144. [Google Scholar] [CrossRef]
  66. Wang, Y.; Wu, Q.; Wang, C.; Ruan, Q. DE-CNN: An Improved Identity Recognition Algorithm Based on the Emotional Electroencephalography. Computational and Mathematical Methods in Medicine 2020, 2020, 1–12. [Google Scholar] [CrossRef]
  67. Duque-Mejía, C.; Castro, A.; Duque, E.; Serna-Guarín, L.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.; Becerra, M.A. Methodology for biometric identification based on EEG signals in multiple emotional states; [Metodología para la identificación biométrica a partir de señales EEG en múltiples estados emocionales]. RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao 2023, 2023, 281–288. [Google Scholar]
  68. Zhang, D.; Yao, L.; Zhang, X.; Wang, S.; Chen, W.; Boots, R. Cascade and Parallel Convolutional Recurrent Neural Networks on EEG-based Intention Recognition for Brain Computer Interface. 2021. [Google Scholar] [CrossRef]
  69. Benomar, M.; Cao, S.; Vishwanath, M.; Vo, K.; Cao, H. Investigation of EEG-Based Biometric Identification Using State-of-the-Art Neural Architectures on a Real-Time Raspberry Pi-Based System. Sensors 2022, 22, 9547. [Google Scholar] [CrossRef]
  70. Hendrawan, M.A.; Rosiani, U.D.; Sumari, A.D. Single Channel Electroencephalogram (EEG) Based Biometric System. Information Technology International Seminar (ITIS) 2022. [Google Scholar] [CrossRef]
  71. Monsy, J.C.; Vinod, A.P. EEG-based biometric identification using frequency-weighted power feature. IET Biometrics 2020, 9, 251–258. [Google Scholar] [CrossRef]
  72. Jalaly Bidgoly, A.; Jalaly Bidgoly, H.; Arezoumand, Z. A survey on methods and challenges in EEG based authentication. Computers and Security 2020, 93, 101788. [Google Scholar] [CrossRef]
  73. Su, J.; Zhu, J.; Song, T.; Chang, H. Subject-Independent EEG Emotion Recognition Based on Genetically Optimized Projection Dictionary Pair Learning. 2023, 13, 977. [Google Scholar] [CrossRef]
  74. Yuvaraj, R.; Baranwal, A.; Prince, A.A.; Murugappan, M.; Mohammed, J.S. Emotion Recognition from Spatio-Temporal Representation of EEG Signals via 3D-CNN with Ensemble Learning Techniques. Brain Sciences 2023, 13, 685. [Google Scholar] [CrossRef] [PubMed]
  75. Si, X.; Huang, D.; Sun, Y.; Huang, S.; Huang, H.; Ming, D. Transformer-based ensemble deep learning model for EEG-based emotion recognition. Brain Science Advances 2023, 9. [Google Scholar] [CrossRef]
  76. Qu, Z.; Zheng, X. EEG Emotion Recognition Based on Temporal and Spatial Features of Sensitive signals. Journal of Electrical and Computer Engineering 2022, 2022, 5130184:1–5130184:8. [Google Scholar] [CrossRef]
  77. Ji, Y.; Dong, S.Y. Deep learning-based self-induced emotion recognition using EEG. Frontiers in neuroscience 2022, 16. [Google Scholar] [CrossRef] [PubMed]
  78. Deng, X.; Lv, X.; Yang, P.; Liu, K.; Sun, K. Emotion Recognition Method Based on EEG in Few Channels. Data Driven Control and Learning Systems (DDCLS) 2022, 1291–1296. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.