Preprint
Review

This version is not peer-reviewed.

EEG-Based Biometric Identification and Emotion Recognition: An Overview

A peer-reviewed article of this preprint also exists.

Submitted: 01 April 2025

Posted: 03 April 2025


Abstract
This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as Support Vector Machines (SVM), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.

1. Introduction

Human identification systems are usually based on passwords, access cards, and PINs, which are vulnerable to theft, loss, or forgetfulness. These limitations are addressed by developing biometric systems that allow the identification of individuals based on physical characteristics or physiological signals. Commonly used physical characteristics include fingerprints, iris patterns, and facial features, while physiological signals involve data like voice, EEG, EMG, and ECG, among others [1]. Physiological signals, particularly EEG, have garnered significant interest for biometric applications due to their unique characteristics and inherent robustness against impersonation attacks, as they are not visible to the human eye [2].
EEG signals, generated by the brain’s electrical activity, are commonly used in identifying pathologies such as brain tumors, cerebral dysfunctions, and sleep disorders. Their suitability for biometric identification systems stems from their universality, uniqueness, permanence, and measurable properties [3]. These required properties make EEG an attractive candidate for secure identification systems, as it can effectively distinguish between individuals [4]; given these advantages, multiple biometric systems based on EEG have been proposed.
The literature reports multiple studies of biometric identification based on EEG signals, such as the one presented in [5], where a biometric system based on EEG signals was proposed. The researchers formulated a binary optimization problem for channel selection and used a support vector machine with a radial basis function kernel (SVM-RBF) on features based on autoregressive coefficients. The proposed method achieved an accuracy of 94.13% using 23 sensors with five autoregressive coefficients, which gives an idea of the identification potential of this type of signal.
This open research field has explored various approaches for EEG-based biometric systems. In [6], advanced EEG channel selection methods were introduced, obtaining high accuracy with fewer sensors. Deep learning approaches have also been tested, with models such as CNNs (Convolutional Neural Networks), LSTMs (Long Short-Term Memory), and GRUs (Gated Recurrent Units) achieving accuracies above 96% [7]. In addition, EEG has also been applied in single-channel configurations, demonstrating substantial accuracy through signal segmentation and feature extraction techniques [8].
Biometric systems based on electroencephalographic signals have demonstrated high performance in terms of accuracy. However, these studies have been carried out with databases that include a limited number of individuals, which limits the generality of the reported results.
In [9], a portable system for brainwave analysis was proposed for recognizing positive, negative, and neutral emotions using the DEAP and SEED databases. Among the tested methods, the Long Short-Term Memory (LSTM) deep learning approach demonstrated the best performance, achieving 94.12% accuracy in identifying emotional states. In another study, [10] presented machine learning models, such as the k-Nearest Neighbor (KNN) regressor with Manhattan distance, which utilized features from the Alpha, Beta, and Gamma bands, as well as the differential asymmetry of the Alpha band [11]. This approach showed promising results in predicting valence and arousal, achieving an accuracy of 84.4%. These findings underscore the potential of EEG-based models to reliably infer emotional states and deepen the understanding of affective responses. Further studies on EEG-based biometric recognition using deep learning techniques illustrate how convolutional and recurrent neural networks can extract distinctive features from brain signals, achieving high levels of accuracy in biometric identification. These advanced approaches open new pathways for developing more secure and adaptive identification systems capable of functioning effectively under challenging conditions [12].
Building on recent advances in EEG signal processing, recent research in EEG-based biometric identification and emotion recognition highlights the importance of multimodal databases and fusion strategies to improve recognition accuracy. A systematic review of studies from 2017 to 2024 identified the DEAP (Dataset for Emotion Analysis using Physiological signals), SEED (Shanghai Jiao Tong University Emotion EEG Dataset), DREAMER (Database for Emotion Recognition through EEG and ECG Signals), and SEED-IV databases as the most widely used. Deep learning models such as TNAS (Transferable Neural Architecture Search), GLFANet (Graph-based Learning Feature Attention Network), ACTNN (Attention-based Convolutional Temporal Neural Network), and ECNN-C (Efficient Convolutional Neural Network with Contrastive Learning) have demonstrated effectiveness in emotion recognition [13]. Additionally, the MED4 database, which integrates EEG signals with photoplethysmography, speech, and facial images, has shown significant accuracy gains in emotion detection. These improvements, achieved through feature- and decision-level fusion, include a 25.92% increase over speech alone and a 1.67% increase over EEG alone in anechoic conditions [14].
Other studies have focused on optimization techniques for EEG channel selection, such as the binary particle swarm optimization (BPSO) algorithm, which reduces the number of signals to be processed by identifying specific brain regions, thereby reducing noise and computational cost while improving feature relevance and accuracy in emotion recognition [15]. In [16], the M3CV database was generated to expand the generality of recognition models by including multiple subjects, sessions, and tasks; collecting such data for model building and hypothesis validation while effectively managing intra- and inter-subject variability can be considered one of the most complex tasks in the field. Finally, some studies have analyzed multimodality considering different types of signals, as in the DEAP database, which can benefit accuracy and generality but simultaneously increases costs and decreases feasibility in real applications. Therefore, the need for techniques and methodologies to cover the existing gaps is highlighted.
Considering the above, it is evident that the study of biometric identification techniques is a tangible necessity, with a permanent need for improvement in computational performance and precision, as well as the mitigation of vulnerability risks. Regarding biometric identification based on EEG signals, this work aims to reveal the importance and influence of emotions on EEG signals and, consequently, their impact on biometric identification. Thus, developing new approaches and broadening the scope of studies that address the effects of emotions on EEG signals is essential to improve the performance of biometric identification systems. This article is structured as follows: after the introduction, the methods and databases used for EEG-based biometric identification and emotion recognition are presented. The following sections discuss biometric identification studies that consider the analysis of emotions. Finally, conclusions and current challenges highlight promising directions for future research.
The conceptual framework underlying this analysis is presented in Figure 1. This framework provides a structured overview of the key elements of EEG-based biometric identification, emotion identification, and biometric identification that accounts for the effects of emotions. The surveyed studies begin with the collection of datasets, followed by their preprocessing, and then the extraction and selection of features using techniques in the time, frequency, time-frequency, non-linear, and spatial domains. Finally, biometric identification and emotion recognition methods based on machine learning are presented, and the limited studies that perform biometric identification considering emotions are highlighted. Additionally, meta-analyses focused on performance metrics such as accuracy (considering database, feature extraction, and classification techniques) are summarized in Table 1, Table 2, and Table 3. The results reveal the challenges at different stages of EEG signal processing. Finally, the framework outlines future prospects, such as adaptive AI models and multimodal data integration, as strategies to improve the generality of biometric systems in emotionally dynamic contexts.

2. Literature Review Process

Few studies specifically address biometric identification based on EEG signals in relation to emotional states. This overview summarizes studies in emotion recognition, biometric identification, and the interrelationship between the two. To conduct this review, the Scopus database was queried with specific search criteria: “emotion recognition,” “biometric identification,” and “EEG biometric identification and emotions.” Article selection was guided by availability criteria, article categories, and the relevance of studies linking emotions and biometric identification, as well as experimental articles focused on biometrics and emotions. The articles were analyzed and discussed throughout this document.
Figure 2 shows the number of publications in Scopus under the following queries: ’biometric identification and EEG’, ’emotion recognition and EEG’, and ’biometric identification and emotion recognition and EEG’, which reveals the state of current research on EEG signal integration for biometric identification and emotion recognition. The results show that emotion recognition based on EEG signals has been widely studied compared to biometric identification using EEG signals. Furthermore, biometric identification that considers emotional states is an emerging field. This suggests the need to significantly increase research efforts in EEG-based biometric identification to advance the development of more complex and context-aware systems. Figure 2 also shows that emotion identification has experienced steady growth, reaching its peak in 2023 with 539 studies, followed by biometric identification, which peaked in 2021 with 34 studies. The integration of both areas has been less explored than the individual disciplines, with a growing number of studies over the years, reaching its peak in 2021 with six studies.

3. Electroencephalography (EEG): Foundations and Applications

3.1. Brain Anatomy Relevant to EEG

The human brain, the most complex organ in the central nervous system (CNS), is fundamental to functions such as self-awareness, speech, movement, and memory storage. As noted by [29], ’almost all organs in the human body are potentially transplantable. However, brain transplantation would be equivalent to transplanting the person,’ highlighting its unique role in defining individual identity. Structurally, the brain is divided into two interconnected hemispheres (see Figure 3), the right and the left, each specializing in distinct functions and maintaining an inverse relationship with the body: the right hemisphere controls the left side, while the left hemisphere controls the right. The right hemisphere is dominant in non-verbal processing, including emotional regulation, memory through images, and the interpretation of sensory inputs such as taste and visual stimuli. Conversely, the left hemisphere excels in verbal and rational functions, such as processing symbols, letters, numbers, and words [30]. These distinct yet complementary roles underscore the brain’s remarkable complexity and its critical role in EEG signal generation.

3.2. EEG Signals and Their Properties

Brain waves are the electrical impulses generated by chains of neurons, and these signals are distinguished by their speed and frequency. There are five types of brain waves: alpha, beta, theta, delta, and gamma. Some of them have low frequencies, while others have higher ones. These five waves remain active throughout the day, and depending on the activity being performed, some of them tend to be stronger than others [31].
Delta waves (1 to 3 Hz) are the waves with the greatest amplitude and are associated with deep sleep. These waves are related to activities of which we are not aware, such as the heartbeat, and are also observed during states of meditation. The production of the delta rhythm coincides with the regeneration and restoration of the central nervous system [32].
Theta waves (3.5 to 8 Hz) are associated with imagination and reflection. These waves also appear during deep meditation. Theta waves are of great importance in learning and are produced between wakefulness and sleep when processing unconscious information, such as nightmares or fears [33]. Alpha waves (8 to 13 Hz) appear during states of low brain activity and relaxation. They are waves of greater amplitude compared to beta waves. Generally, alpha waves appear as a reward after a job well done [34].
Beta waves (12 to 33 Hz) appear during states when attention is directed towards external cognitive tasks. They have a fast frequency and are associated with intense mental activities [35]. Gamma waves (25 to 100 Hz) originate in the thalamus, and these signals are related to tasks requiring high cognitive processing [36].
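As an illustration, the relative power of an EEG segment in each of these bands can be estimated from its spectrum. The following dependency-free Python sketch accumulates spectral power per band; the band limits follow the approximate ranges above (which overlap slightly, as in the literature), and the naive DFT is only suitable for short illustrative segments.

```python
import math

# Approximate band limits in Hz, following the ranges above
BANDS = {"delta": (1, 3), "theta": (3.5, 8), "alpha": (8, 13),
         "beta": (12, 33), "gamma": (25, 100)}

def band_powers(signal, fs):
    """Accumulate the spectral power of a short 1-D EEG segment per band.

    Uses a naive DFT (O(n^2)); fine for illustration, use an FFT in practice.
    """
    n = len(signal)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):                 # positive frequencies only
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = (re * re + im * im) / n            # power at this DFT bin
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += p
    return powers

# A 10 Hz sinusoid sampled at 128 Hz should land in the alpha band
fs = 128
segment = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
powers = band_powers(segment, fs)
dominant = max(powers, key=powers.get)         # expected: "alpha"
```

In real pipelines such band powers are typically computed with Welch's method over many epochs rather than a single raw DFT.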

3.3. Feature Extraction from EEG Signals

Different feature extraction techniques include methods in the time domain, frequency domain, time-frequency domain, spatial domain, and non-linear domain. These different techniques aim to describe a signal by its characteristics. Some techniques used in EEG (Electroencephalography) include variance, standard deviation, correlation coefficient, and Hjorth parameters. These methods are computationally less complex. There are also autoregressive (AR) models, fast Fourier transform (FFT), short-time Fourier transform (STFT), spectral power, wavelet transform, Hilbert-Huang transform, common spatial patterns, entropy, among others [37].
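Among the computationally light time-domain descriptors mentioned, the Hjorth parameters (activity, mobility, and complexity) are easy to sketch. A minimal, dependency-free Python example, illustrative rather than tied to any particular study:

```python
import math

def hjorth_parameters(signal):
    """Hjorth activity (variance), mobility, and complexity of a 1-D signal."""
    def variance(x):
        mean = sum(x) / len(x)
        return sum((v - mean) ** 2 for v in x) / len(x)
    d1 = [b - a for a, b in zip(signal, signal[1:])]   # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # second difference
    activity = variance(signal)
    mobility = math.sqrt(variance(d1) / activity)
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid the complexity is close to 1, since differencing does not change its dominant frequency; broadband or noisy signals yield higher complexity.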
Table 1, Table 2, and Table 3 summarize different techniques applied to feature extraction from EEG signals for biometric identification and emotion recognition. The presented studies employ various EEG databases, highlighting DEAP and SEED as the most widely used for biometrics and emotion identification, along with self-collected databases acquired with commercial devices such as the Emotiv Epoc+. In terms of feature extraction, time- and frequency-domain techniques such as Auto-Regressive (AR) modeling, Power Spectral Density (PSD), the Fast Fourier Transform (FFT), and Wavelet Packet Decomposition (WPD) are widely used for their ability to capture relevant information from the EEG signal. Spatiotemporal-domain methods, such as Common Spatial Patterns (CSP) and Phase Locking Value (PLV), have shown high effectiveness in discriminating between subjects and emotional states. Regarding classification, deep neural network models such as Deep Convolutional Neural Networks (DCNN), Long Short-Term Memory (LSTM), and hybrid attention-based LSTM-MLP architectures have achieved accuracies higher than 97%. However, traditional methods such as Support Vector Machines (SVM), Random Forest (RF), Hidden Markov Models, and Gaussian Naïve Bayes (GNB) present comparable or even superior performance when combined with feature selection techniques such as Sequential Floating Forward Selection (SFFS) and Random Forest-based binary selection. Hybrid models are also highlighted, achieving 99.96% accuracy and demonstrating artificial intelligence’s potential in optimizing biometric identification and emotion recognition from EEG signals.
Methods in the time domain, such as AR models, have advantages over the FFT, offering better frequency resolution and improved spectral estimates on short segments of EEG signals. However, they also have limitations. One of these is the lack of clear guidelines for selecting the parameters of the spectral estimation. Additionally, AR models require an optimal order: too low an order may smooth out the spectrum, while too high an order may introduce false peaks.
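As an illustration of how AR coefficients can be obtained in practice, the Yule-Walker equations can be solved with the Levinson-Durbin recursion; the order remains a user choice, subject to the trade-off just described. A minimal Python sketch:

```python
def ar_coefficients(signal, order):
    """Estimate AR(order) coefficients by solving the Yule-Walker
    equations with the Levinson-Durbin recursion (pure-Python sketch)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    # Biased autocorrelation estimates r[0..order]
    r = [sum(x[t] * x[t + k] for t in range(n - k)) / n
         for k in range(order + 1)]
    a = [0.0] * (order + 1)        # a[1..order] hold the AR coefficients
    err = r[0]                     # prediction-error power
    for k in range(1, order + 1):
        acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
        ref = acc / err            # reflection coefficient
        new_a = a[:]
        new_a[k] = ref
        for j in range(1, k):
            new_a[j] = a[j] - ref * a[k - j]
        a = new_a
        err *= (1.0 - ref * ref)
    return a[1:], err
```

On a synthetic AR(1) process x[t] = 0.8 x[t-1] + noise, the first estimated coefficient converges to roughly 0.8 as the segment length grows, which is the property EEG feature extractors exploit.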
Among the frequency-domain models, the FFT allows mapping from the time domain to the frequency domain, which helps investigate the amplitude distribution of spectra and reflect different brain tasks. However, its limitations include not being suitable for representing nonstationary signals, where the spectral content varies over time.
The Short-Time Fourier Transform (STFT) is simple and easy to implement, but its limitations include longer segments violating the quasi-stationarity assumption required by the Fourier transform.
The Power Spectral Density (PSD) provides information about the energy distribution of the signal across different frequencies. However, it is limited in presenting additional timescale information, considering that EEG signals possess non-stationary and non-linear characteristics [38].
Wavelets are a time-frequency technique particularly effective in dealing with non-stationary signals. They allow the signal to be decomposed in both time and frequency domains, enabling the simultaneous use of long time intervals for low-frequency information and short time intervals for high-frequency information. However, to analyze EEG signals accurately, they require a proper choice of the mother wavelet and an appropriate number of decomposition levels.
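To make the idea concrete, a multi-level discrete wavelet decomposition can be sketched with the Haar wavelet, the simplest mother wavelet; EEG studies typically prefer smoother wavelets such as Daubechies-4, so this is purely illustrative.

```python
import math

def haar_dwt(signal, levels):
    """Multi-level Haar DWT. Returns [cA_L, cD_L, ..., cD_1]:
    one coarse approximation plus one detail list per level."""
    s = 1.0 / math.sqrt(2.0)       # orthonormal scaling preserves energy
    approx = list(signal)
    details = []
    for _ in range(levels):
        pairs = len(approx) // 2
        avg = [(approx[2 * i] + approx[2 * i + 1]) * s for i in range(pairs)]
        dif = [(approx[2 * i] - approx[2 * i + 1]) * s for i in range(pairs)]
        details.insert(0, dif)     # finest detail level ends up last
        approx = avg
    return [approx] + details

coeffs = haar_dwt([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], 3)
```

Low-frequency content concentrates in the approximation (long time support) and high-frequency content in the finer detail levels (short time support), matching the time-frequency trade-off described above.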
The Hilbert-Huang Transform (HHT) does not require assumptions about the linearity and stationarity of the signal. It allows for adaptive and multi-scale decomposition of the signal and does not rely on any predefined basis function. However, one of its limitations is that it is defined by an iterative algorithm and lacks a closed-form mathematical formulation; the final results can be influenced by how the algorithm is implemented and by the definition of its variables and control structures [39].
Among the spatial techniques, we have Common Spatial Patterns (CSP), which have the capability to project EEG signals from multiple channels into a subspace where differences between classes are emphasized and similarities are minimized. The alternative method TDCSP optimizes CSP filters and effectively reflects changes in discriminative spatial distribution over time. However, one of its limitations is that it requires training samples and class information to calculate the linear transformation matrix. Additionally, this technique requires a large number of electrodes to be effective [40].
Among the nonlinear techniques, entropy is robust in analyzing short data segments, resistant to outliers, capable of dealing with noise through appropriate parameter tuning, and applicable to stochastic and deterministically chaotic signals. It offers various alternatives to characterize signal complexity with changes over time and quantify dynamic changes of events related to EEG signals. However, one of its limitations is the lack of clear guidelines on choosing the parameters m (embedding dimension of the series) and r (similarity tolerance) before calculating the approximate or sample entropy. These parameters will affect the entropy of each EEG data record during different mental tasks, thus impacting the classification accuracy.
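A direct implementation of sample entropy makes the role of the parameters m and r explicit. In this sketch r is expressed as a fraction of the signal's standard deviation, a common convention; the O(n²) pairwise comparison is suitable only for short segments.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series.
    m: embedding dimension; r: tolerance, as a fraction of the SD."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    tol = r * sd

    def matches(dim):
        # Same number of templates for both dimensions, so counts are comparable
        templates = [x[i:i + dim] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= tol:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strongly periodic signal yields a SampEn near zero, while an irregular one yields a higher value, which is why entropy-based features help separate mental states.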
Lyapunov exponents leverage the chaotic behavior of an EEG signal for classification tasks and, when combined with other linear or nonlinear features, can lead to improved results. However, finding optimal parameters to calculate the Lyapunov exponents requires significant effort to enhance its performance [41].

4. Emotion Recognition and Biometric Identification Using EEG

Electroencephalography (EEG) has gained increasing attention in emotion recognition and biometric identification due to its ability to capture physiological signals with characteristics unique to each individual, allowing both emotions and identity to be identified; at the same time, emotional states can influence biometric measurements. This section explores three key applications of EEG signals: first, their use in biometric identification by analyzing brain wave patterns; second, the use of EEG signals for emotion recognition, highlighting how neural responses vary with emotional states; and finally, an integrated approach that combines biometric identification and emotion recognition. Together, these applications illustrate the versatility of EEG in creating robust and adaptive systems that leverage identity and emotional information for increased accuracy and security.

4.1. Biometric from EEG Signals

Biometrics is the science of quantifying physiological or behavioral traits to identify individuals through statistical analysis. This capability is inherently present in humans, enabling us to recognize others by features such as voice tone, body shape, and facial characteristics. Biometric authentication (verification) confirms whether a person is who they claim to be, while biometric identification determines a person’s identity among a set of enrolled individuals. With nearly 8 billion people in the world, each distinguished by their unique identity, recognition methods fall into three main categories: 1) Knowledge-based identification, which relies on information known only to the person, such as passwords, PINs, or ID numbers; 2) Possession-based identification, using unique objects like ID cards, passports, or badges; and 3) Biometric identification, based on distinctive physical or behavioral traits, including fingerprints, facial features, and voice patterns [42].
Biometric identification has gained significant popularity because it is difficult to falsify compared to knowledge- and possession-based methods, which can be forgotten, duplicated, stolen, or lost. This method offers greater security, especially when using unimodal, bimodal, or multimodal systems, which incorporate one, two, or more physiological characteristics. The literature reports multiple studies with different types of signals for biometric identification. However, EEG signals have been little explored for biometric identification, so this is considered an open field of research that seeks to take advantage of unique brain wave patterns, which are difficult to replicate and can be considered highly secure for applications that require rigorous identification [1]. Besides, unlike trait-based biometrics, EEG-based systems offer greater resistance to external manipulation, as brainwave patterns are generated internally and are unique to each individual.
Studies reported in the literature have revealed that EEG-based biometric identification can achieve high levels of accuracy. An aspect to consider when using this type of signal is the location of the electrodes, typically following the 10-20 acquisition protocol. This protocol guarantees consistent and replicable data capture, which is essential as it allows for precise identification unique to each individual.
Multiple techniques for biometric identification have been tested, with artificial neural networks (ANN) being the most prominent; these have also been widely used in emotion identification from EEG signals. These networks can learn complex patterns and extract relevant signal features, providing a solid foundation for biometric identification. Support vector machines (SVM) are highly effective for biometric and emotion identification from features extracted from EEG signals. Their ability to handle high-dimensional datasets makes them a valuable option [43]. Deep neural network-based architectures, such as convolutional neural networks (CNN), have been applied for EEG signal analysis due to their capability to extract spatial and temporal features, allowing the identification of complex patterns present in EEG signals [44]. Similarly, LSTM (long short-term memory) networks, a variant of recurrent neural networks (RNN), are effective in modeling temporal sequences in EEG signals [45].
The predominant methodologies used for biometric identification from EEG signals typically follow a comprehensive approach encompassing the following steps:
i) Data acquisition and preprocessing: EEG signals are collected using specialized devices, and the preprocessing stage is carried out to improve data quality by reducing interference from non-neural activity, removing noise and irrelevant signals. Methods include digital filters, decomposition techniques such as the Wavelet transform, artifact countermeasures, and independent component analysis (ICA), among others, which allow artifacts to be removed and data to be normalized, with the aim of improving the accuracy of prediction systems for biometric identification [46,47]. Another strategy applied at this stage is epoching, which divides the EEG data into smaller segments to analyze specific time intervals of the EEG signals [48]. An important aspect to consider is that acquiring an EEG signal is not as easy as taking a fingerprint. However, technology has been advancing, with more comfortable and portable acquisition equipment being developed, making it relevant to minimize the number of channels needed for biometric identification. Another important problem to consider in acquisition is visual, auditory, and olfactory stimuli, which affect the dynamics of EEG signals and limit generalization. In addition, cognitive activities must be considered, since participant fatigue can also affect the dynamics of EEG signals. Therefore, most studies have been carried out under controlled conditions, which limits the application of EEG signals for biometric identification in real environments.
ii) Feature Extraction: At this stage, relevant features such as amplitudes, frequencies, and temporal patterns are selected and extracted from EEG signals using advanced signal processing techniques. It is mentioned in [49] that there are many techniques to extract features from EEG signals, including Eigenvector methods (EM), different types of Wavelet transform such as the Discrete Wavelet Transform and Continuous Wavelet Transform, Time-Frequency Distribution (TFD), and the Autoregressive Method (ARM). However, the authors explicitly applied the Fast Fourier Transform (FFT) for feature extraction. They focused on comparing the performance of different feature extraction techniques, and the FFT was chosen as the primary method for the study, achieving an accuracy of 96.81% in classifying EEG signals.
iii) Predictive model construction: In this stage, the predictive model is trained and validated to identify individuals. This process is carried out using independent datasets and employing metrics to measure performance such as accuracy, sensitivity and specificity. In addition, fine-tuning and optimization are performed in which the model parameters are adjusted to improve predictive capacity and generalization.
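The performance metrics named in step (iii) can be computed directly from a confusion matrix. A minimal Python sketch for the binary case (labels 1 = target subject, 0 = impostor; the example values are illustrative):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Example: 6 trials with one false negative and one false positive
acc, sens, spec = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

For multi-class subject identification the same idea applies per subject (one-vs-rest), and the values are typically averaged across subjects.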
Table 4 summarizes several biometric identification studies using electroencephalogram (EEG) signals. It presents the pre-processing methods, the datasets used, the feature extraction and selection techniques, the classification methods, and the accuracy achieved in each study. Commonly used techniques include multivariate variational mode decomposition (MVMD), Fourier-Bessel series expansion, convolutional neural networks (CNN), and functional connectivity (FC) analysis. Classification methods such as k-nearest neighbors (K-NN), support vector machines (SVM), and deep learning (DL) have shown accuracies ranging from 75.8% to 99.9%, depending on the technique and dataset, with SVM obtaining the best results in most cases; this is because SVM performs very well with relatively modest amounts of data, compared with the amount a neural network requires to achieve comparable results. These studies highlight the diversity of feature extraction and classification methods used to improve the precision of biometric identification.
On the other hand, EEG signals have been studied in combination with other types of signals or traits in what are known as multimodal approaches. These approaches aim to improve the reliability and accuracy of biometric systems; however, they increase complexity and implementation cost, which limits some applications. Some reported multimodal combinations are: i) EEG + Face Recognition: enhances security by combining brainwave patterns with facial features; an example is hybrid authentication systems designed for high-security access control. ii) EEG + Electrooculography (EOG): incorporates eye movement data to improve signal interpretation, particularly for eye-tracking and cognitive load assessment applications [61]. iii) EEG + ECG (Electrocardiography): improves robustness and accuracy in biometric identification by taking advantage of the strengths of each type of signal; this approach can become more advantageous as acquisition hardware evolves [62]. iv) EEG + gait analysis: explored for continuous authentication, especially in mobile and wearable systems [63]. v) EEG + Speech Recognition: investigated in cognitive-state authentication, particularly in stress-aware security systems, where speech characteristics combined with EEG responses improve user verification under varying conditions [1,64].
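The decision-level fusion used in such multimodal systems can be as simple as a weighted combination of each modality's class scores. A hedged sketch (the modality names, weights, and scores below are illustrative, not drawn from any cited study):

```python
def fuse_decisions(modality_scores, weights):
    """Weighted decision-level fusion: combine per-modality class-score
    dictionaries into a single fused score per class and pick the winner."""
    fused = {}
    for modality, scores in modality_scores.items():
        w = weights[modality]
        for label, p in scores.items():
            fused[label] = fused.get(label, 0.0) + w * p
    return max(fused, key=fused.get), fused

# Illustrative scores: the EEG channel is trusted more than the face channel
scores = {"eeg":  {"subject_A": 0.70, "subject_B": 0.30},
          "face": {"subject_A": 0.40, "subject_B": 0.60}}
decision, fused = fuse_decisions(scores, {"eeg": 0.7, "face": 0.3})
# decision is "subject_A": 0.7*0.70 + 0.3*0.40 = 0.61 beats 0.39
```

Feature-level fusion, by contrast, concatenates the modality feature vectors before a single classifier; the choice between the two is part of the cost/accuracy trade-off discussed above.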
Besides, it is important to highlight promising systems for biometric identification from EEG signals, such as transformer-based models [65] and federated learning [66]. Transformers allow spatial and temporal dependencies in EEG signals to be captured more efficiently than traditional convolutional networks, improving neural pattern classification accuracy. Furthermore, some architectures have shown that self-attention mechanisms can identify critical regions in EEG data, optimizing the extraction of relevant features for biometric identification tasks [67]. On the other hand, federated learning (FL) can preserve privacy in EEG models, allowing decentralized training of neural networks without sharing sensitive data and improving models’ generalization by taking advantage of information distributed across multiple devices. Combining transformers and federated learning in EEG biometrics can offer more accurate, secure, and scalable systems with applications in continuous authentication and access control in dynamic environments.
Finally, it is important to highlight that hybrid approaches combining learning and optimization algorithms have been shown to improve the robustness and accuracy of EEG-based biometric systems. Evolutionary algorithms such as genetic algorithms (GA) and particle swarm optimization (PSO) have been successfully applied to optimize convolutional neural network architectures, improving convergence speed and generalization. Bayesian optimization (BO) has also been used to tune hyperparameters in EEG classification models, thereby increasing efficiency in training deep networks. Furthermore, hybrid feature engineering (integrating features extracted by deep learning with features obtained by traditional extraction methods, such as power spectral density and wavelet entropy) has demonstrated enhanced discriminative ability. In addition, metaheuristic algorithms such as ant colony optimization (ACO) and differential evolution (DE) have been used for feature selection, reducing computational cost while maintaining high accuracy [12,68].

4.2. Emotion Recognition from EEG Signals

Emotions are responses to stimuli accompanied by physiological changes that predispose an individual to action. They differ from feelings in their high intensity over a short period of time. Emotion recognition remains an open field of research, not only because of the limited volume of available datasets but also because of the diversity and complexity with which some individuals express emotions. Emotions can also be associated with diseases.
Some emotion classifications propose four primary emotions, while other authors argue for six. A preliminary study on emotions supported the latter theory, identifying sadness, surprise, anger, fear, happiness, and contempt as the six primary emotions [69]. Secondary emotions are combinations of primary emotions. Emotions can be represented graphically along the valence, arousal, and dominance axes of a Cartesian space.
Valence is a fundamental dimension in constructing emotional experiences, representing the motivational aspect of emotions, ranging from pleasant to unpleasant states. It originates from distinct neurobiological structures, where one activates the appetitive motivational system and the other the defensive system [70]. This distinction has been widely observed in humans, primates, and other mammals through functional magnetic resonance imaging (fMRI) [71]. Closely related to valence, arousal reflects the level of energy expenditure during an emotional response and corresponds to sympathetic activation. Studies indicate that arousal is often influenced by valence, as the engagement of the appetitive or defensive system tends to increase arousal levels. Meanwhile, dominance is the most recently conceptualized dimension and refers to perceived control over emotional responses. This function, which involves inhibiting or sustaining behavioral reactions, is associated with more evolved brain structures responsible for response inhibition, contextual evaluation, and planning [72].
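As a minimal illustration of how these dimensions are operationalized, the sketch below maps valence and arousal self-assessment ratings (on a 1-9 scale, as used in several affective datasets) to coarse emotion quadrants. The midpoint threshold and the example emotion labels are simplifying assumptions, and dominance is omitted for brevity:

```python
def vad_quadrant(valence, arousal, midpoint=5.0):
    """Map 1-9 valence/arousal ratings to one of four coarse quadrants.

    Thresholding at the scale midpoint is a common simplification when
    turning continuous ratings into discrete class labels.
    """
    high_v, high_a = valence >= midpoint, arousal >= midpoint
    return {
        (True, True): "HVHA (e.g., joy, excitement)",
        (True, False): "HVLA (e.g., calm, contentment)",
        (False, True): "LVHA (e.g., fear, anger)",
        (False, False): "LVLA (e.g., sadness, boredom)",
    }[(high_v, high_a)]

print(vad_quadrant(7.2, 6.8))  # HVHA (e.g., joy, excitement)
```

Many of the classification studies surveyed below reduce the continuous valence/arousal space to binary or quadrant labels in exactly this fashion before training a classifier.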
The interaction between emotions and EEG signals is fundamental to biometric identification. Emotional states can cause variations in EEG signal patterns, which affects the stability and reliability of EEG-based biometric systems. Emotion recognition is therefore relevant to the design of biometric identification systems based on EEG signals.
One of the major obstacles to the study of emotions from EEG signals is data collection, since it requires paradigms that reliably evoke emotions and electrode systems that are often difficult to set up. Although various public databases exist, they are limited in size and highly heterogeneous, so the generality of machine learning models designed to recognize emotional states from EEG signals cannot be taken for granted. Building large datasets therefore remains one of the major challenges in this field. The databases most frequently reported in the literature are the following:
i) The DEAP dataset contains EEG and peripheral physiological signals from 32 participants (50% female), aged between 19 and 37 years (mean 26.9). Each participant watched 40 one-minute music video excerpts, which were rated for arousal, valence, liking, dominance, and familiarity. Peripheral recordings included EOG, four EMG signals (from the zygomaticus major and trapezius muscles), GSR, blood volume pulse (BVP), temperature, and respiration. Frontal face video was additionally recorded for 22 participants. EEG was recorded from 32 channels at a sampling rate of 512 Hz [73].
ii) The MAHNOB database includes 27 participants (11 males, 16 females). EEG was recorded from 32 channels at a sampling rate of 256 Hz. In addition, videos of the participants’ faces and bodies were recorded with six cameras at 60 frames per second (fps), eye gaze was recorded at 60 Hz, and audio was recorded at 44.1 kHz. During the recording, 20 videos were presented to each participant, and for each video emotional keywords, arousal, valence, dominance, and predictability were assessed on a rating scale from 1 to 9 [74].
iii) The SEED database consists of EEG signals from seven men and eight women, with a mean age of 23.27 years (standard deviation 2.37). Fifteen clips from Chinese films were selected as stimulus material to elicit positive, negative, and neutral emotions, and each experiment consists of 15 trials. EEG was captured with a 62-channel cap following the international 10-20 system, then preprocessed and downsampled to 200 Hz. Each signal is labeled -1 (negative), 0 (neutral), or +1 (positive) [75].
iv) LUMED-2 (the Loughborough University Multimodal Emotion Database-2) contains simultaneous multimodal data from 13 participants (6 women, 7 men) presented with audiovisual stimuli. The stimuli, short video clips selected from the web to provoke specific emotions, last 8 minutes and 50 seconds in total. After each session, participants labeled the clips with the emotional states they experienced while watching; the labeling produced three emotions: "sad," "neutral," and "happy." Facial expressions were captured with a webcam at 640x480 resolution and 30 frames per second (fps). EEG was captured with an 8-channel wireless ENOBIO device at a temporal resolution of 500 Hz, filtered to the [0, 75] Hz range, with baseline subtraction applied to each window. Peripheral physiological data, namely galvanic skin response (GSR), were recorded with a Bluetooth-connected EMPATICA E4 wristband [76].
The datasets described above have been used in numerous data science studies on emotion identification and, more recently, on biometric identification from EEG signals. They offer a wide variety of physiological signals and emotional stimuli for training and evaluating classification models, and such diversity is essential for advancing emotion recognition systems because it reflects real-world scenarios and varied emotional expression. However, the differences between datasets in sampling frequency, acquisition equipment, and experimental paradigm, among other factors, reinforce the limitation noted above regarding the true generality of predictive models for emotion identification. Researchers have therefore experimented with these datasets by applying various preprocessing methods, feature extraction techniques, and classification models to improve the accuracy and generality of emotion recognition systems.
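As a concrete example of working with such datasets, the sketch below segments a DEAP-style array (trials x channels x samples, at the 128 Hz rate of DEAP's preprocessed release) into overlapping windows, a common step before feature extraction. Synthetic data stands in for the actual files, and the window and overlap lengths are illustrative choices:

```python
import numpy as np

FS = 128          # DEAP preprocessed sampling rate (Hz)
WIN_S = 4         # window length in seconds
STEP_S = 2        # step in seconds (50% overlap)

def segment(trials, fs=FS, win_s=WIN_S, step_s=STEP_S):
    """Slice (n_trials, n_channels, n_samples) EEG into overlapping windows.

    Returns (n_windows, n_channels, win) plus a trial index per window, so
    per-trial labels (e.g., valence/arousal ratings) can be broadcast.
    """
    win, step = win_s * fs, step_s * fs
    n_trials, n_ch, n_samp = trials.shape
    windows, trial_idx = [], []
    for t in range(n_trials):
        for start in range(0, n_samp - win + 1, step):
            windows.append(trials[t, :, start:start + win])
            trial_idx.append(t)
    return np.stack(windows), np.array(trial_idx)

# Synthetic stand-in shaped like a DEAP-style recording:
# 40 trials x 32 EEG channels x 60 s at 128 Hz.
eeg = np.random.default_rng(1).standard_normal((40, 32, 60 * FS))
x, idx = segment(eeg)
print(x.shape)    # (1160, 32, 512): 29 windows per trial, 40 trials
```

Windowing multiplies the number of training examples per subject, which is one reason segment length is a recurring design choice in the studies surveyed here.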
Table 5 presents an overview of emotion prediction from EEG signals, summarizing each study’s preprocessing, feature extraction, and classification techniques. Preprocessing steps range from filtering techniques such as Laplacian surface filtering, Butterworth filters, and blind source separation to more advanced methods such as artifact subspace reconstruction (ASR). For feature extraction, approaches such as the wavelet transform, principal component analysis (PCA), higher-order spectral analysis (HOSA), and entropy-based methods are used. Classification methods also vary, with linear discriminant analysis (LDA), support vector machines (SVM), k-nearest neighbors (K-NN), quadratic discriminant analysis (QDA), and fuzzy cognitive maps (FCM) commonly applied. The reported methods achieve high accuracy in identifying emotional states from EEG signals, highlighting the effectiveness of various combinations of preprocessing, feature extraction, and classification. Also noteworthy are efforts to simplify emotion identification by limiting the number of channels; for example, the study in [77] achieved 91.88% accuracy on the MAHNOB database using support vector machines for classification.
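A minimal end-to-end sketch of the kind of pipeline summarized in Table 5 is shown below, with deliberate simplifications: FFT band power stands in for the Butterworth filter banks, a nearest-centroid rule stands in for the SVM/LDA classifiers, and the two synthetic "emotional states" differ only in alpha-band amplitude:

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(x, fs):
    """Log band power per channel and band via the FFT periodogram.

    x: (n_channels, n_samples) EEG window -> (n_channels * n_bands,) vector.
    """
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / fs)
    psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2 / x.shape[-1]
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.log(np.concatenate(feats) + 1e-12)

def nearest_centroid(train_x, train_y, test_x):
    """Tiny stand-in for the SVM/LDA classifiers reported in the literature."""
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy demo: two synthetic states that differ in 10 Hz (alpha) amplitude.
rng = np.random.default_rng(7)
fs, n = 128, 512
t = np.arange(n) / fs
def trial(alpha_gain):
    sig = alpha_gain * np.sin(2 * np.pi * 10 * t) + rng.standard_normal((4, n))
    return band_power_features(sig, fs)

X = np.stack([trial(g) for g in [2.0] * 20 + [0.2] * 20])
y = np.array([0] * 20 + [1] * 20)
pred = nearest_centroid(X[::2], y[::2], X[1::2])
print("accuracy:", (pred == y[1::2]).mean())
```

On this toy data the alpha features separate the classes cleanly; real EEG requires the artifact removal, richer features, and stronger classifiers that the table enumerates.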

4.3. Emotion-Aware Biometric Identification

The electroencephalogram (EEG) is a signal that captures information about brain activity. EEG signals are non-stationary: they change over time and are influenced by factors such as human emotions, thoughts, and activities [88]. Emotion-Aware Biometric Identification is an emerging and relatively unexplored area that seeks to enhance biometric identification systems by incorporating emotional states; for example, [89] applied ECG signals to both biometric identification and emotion recognition. While traditional biometric systems focus solely on stable physiological signals, recent studies have investigated how emotions influence these signals, particularly EEG data, to improve the robustness of identification methods [90].
Some approaches in this field have utilized datasets that include conditions such as driving fatigue and various emotional states, as well as artificially induced brain responses like rapid serial visual presentation (RSVP). However, while these datasets involve conditions of fatigue and emotions, these factors are not explicitly analyzed for their impact on biometric identification. For example, [91] presents a convolutional neural network model, GSLT-CNN, which directly processes raw EEG data without requiring feature engineering. This model was evaluated on a dataset of 157 subjects across four experiments. In contrast, [92] focuses on using olfactory stimuli, such as specific aromas, to evoke emotional responses that affect brainwave patterns, thereby uncovering unique aspects of individual identity.
Other research efforts have leveraged emotion-specific datasets designed to capture variations in emotional states for biometric identification purposes. These datasets enable researchers to identify distinct neural responses associated with emotions, providing valuable insights into how emotional context can be integrated into biometric systems [90,93]. Although still in its early stages, emotion-aware biometric identification has the potential to create adaptive, resilient systems that account for the dynamic nature of human emotions.
This area of research has been controversial. Zhang et al. [94] investigated the use of emotional EEG for identification purposes and found that emotion did not affect identification accuracy when using 12-second EEG segments. However, as mentioned by Wang et al. [90], the robustness of this method across different emotional states was not verified.

4.4. Ethical Considerations

The deployment of EEG-based emotion recognition systems entails significant ethical considerations, particularly concerning the management and privacy of emotional data derived from individuals’ unique and sensitive brain patterns. Such data could be misused to manipulate or influence individuals without their explicit consent, leading to risks of exploitation, discrimination, and mass surveillance. Therefore, it is essential to implement robust data protection measures, including anonymization and encryption, and to establish clear regulatory frameworks defining acceptable uses of these technologies [95,96]. Biometric data, including EEG-derived information, is governed by various national and international regulations that protect individual privacy and rights. For instance, the European Union’s General Data Protection Regulation (GDPR) classifies biometric data as sensitive personal information and imposes strict requirements for its processing, including obtaining explicit consent from individuals and restricting data use to legitimate, specified purposes. Similarly, the California Consumer Privacy Act (CCPA) recognizes neural data as sensitive personal information, subjecting it to stringent data protection obligations. At the international level, frameworks such as the United Nations Guiding Principles on Business and Human Rights provide ethical guidelines to prevent the misuse of biometric technologies. Adhering to these regulations safeguards users and fosters trust in developing innovative EEG-based technologies, promoting their responsible adoption and mitigating negative social consequences such as emotion-based discrimination and mass surveillance [97,98].

5. Conclusions and Future Work

EEG-based biometric identification and emotion recognition face multiple challenges that impact their effectiveness and practical application. For biometric identification, a primary difficulty lies in the high variability of EEG features across sessions for the same individual, which complicates consistent identification [52]. Additional challenges include limitations related to the number of channels and temporal windows used and the risk of overfitting in deep learning models [21]. The complex data collection and computational requirements inherent in multi-channel EEG setups further complicate the deployment of these systems [99]. Moreover, identifying robust features from non-stationary EEG signals that are sufficiently discriminatory remains a significant hurdle [60], along with fundamental concerns around privacy, user-friendliness, and authentication standards [100]. Similarly, EEG-based emotion recognition encounters unique obstacles, particularly due to the variability of EEG signals between individuals, which challenges model generalization across unseen subjects [101]. Issues in data processing, generalizability, and the integration of these models into human-computer interaction frameworks present further difficulties [102]. The neural complexity of emotions and individual differences add another layer of complexity to emotion recognition models [103]. Furthermore, the use of single features, redundant signals, and the high number of channels required for effective recognition limit the accuracy and portability of these systems [104,105]. The feature redundancy and computational demands complicate implementation in wearable devices, underscoring the need for efficient, channel-optimized solutions [106]. In this context, Emotion-Aware Biometric Identification emerges as an innovative yet challenging approach, aiming to integrate emotional states into biometric systems to enhance robustness and adaptability. 
These systems could achieve more accurate and personalized identification by incorporating emotion recognition, especially in dynamic environments. However, achieving reliable emotion-aware biometric identification requires addressing additional challenges, such as emotional variability across sessions and individual differences in emotional expression. Future research should focus on optimizing feature selection methods to manage both the non-stationary nature of EEG signals and the influence of transient emotional states. Developing lightweight, high-performance models that integrate biometric and emotional data could open new avenues for secure, adaptive identification systems, particularly in applications where user engagement and real-time adaptability are critical.
Additionally, it is important to highlight that emotions are not the only factors significantly influencing biometric identification from EEG signals. Therefore, future work should consider multiple contextual elements that may affect the dynamics of EEG signals, such as the individual’s health status, the presence of neurological or psychiatric disorders, the level of fatigue, substance use, age [28], and even environmental factors such as noise or lighting. Ignoring these sources of variability could compromise the robustness and generalization of identification models, so future research should consider these aspects to develop systems that are more resilient and adaptive to the diversity of human conditions.

Author Contributions

M.A.B., C.D., C.M., and A.C.: conceptualization, methodology, validation, formal analysis, investigation, resources, writing–original draft preparation, writing–review and editing; M.A.B., L.S., and E.D.: conceptualization, methodology, investigation, supervision; M.A.B. and C.M.: methodology, investigation, resources, writing–review and editing, supervision, project administration, funding acquisition.

Funding

This work was supported by Institución Universitaria Pascual Bravo.

Acknowledgments

The authors would like to thank the contributions of the research project titled "Framework de fusión de la información orientado a la protección de fraudes usando estrategias adaptativas de autenticación y validación de identidad," supported by Institución Universitaria Pascual Bravo.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zapata, J.C.; Duque, C.M.; Gonzalez, M.E.; Becerra, M.A. Data Fusion Applied to Biometric Identification – A Review. Advances in Computing and Data Sciences 2017, 721, 721–733. [Google Scholar] [CrossRef]
  2. Zhong, W.; An, X.; Di, Y.; Zhang, L.; Ming, D. Review on identity feature extraction methods based on electroencephalogram signals 2021, 38. [CrossRef]
  3. Belhadj, F. Biometric system for identification and authentication 2017.
  4. Moreno-Revelo, M.; Ortega-Adarme, M.; Peluffo-Ordoñez, D.; Alvarez-Uribe, K.; Becerra, M. Comparison among physiological signals for biometric identification; Vol. 10585 LNCS, 2017. [CrossRef]
  5. Alyasseri, Z.A.A.; Alomari, O.A.; Makhadmeh, S.N.; Mirjalili, S.; Al-Betar, M.A.; Abdullah, S.; Ali, N.S.; Papa, J.P.; Rodrigues, D.; Abasi, A.K. EEG Channel Selection for Person Identification Using Binary Grey Wolf Optimizer. IEEE Access 2022, 10, 10500–10513. [Google Scholar] [CrossRef]
  6. Abdi Alkareem Alyasseri, Z.; Alomari, O.A.; Al-Betar, M.A.; Awadallah, M.A.; Hameed Abdulkareem, K.; Abed Mohammed, M.; Kadry, S.; Rajinikanth, V.; Rho, S. EEG Channel Selection Using Multiobjective Cuckoo Search for Person Identification as Protection System in Healthcare Applications. Computational Intelligence and Neuroscience 2022, 2022, 1–18. [Google Scholar] [CrossRef]
  7. R.A., R.S.E.T.A.W.A.R. Deep Learning Approaches for Personal Identification Based on EEG Signals. Lecture Notes on Data Engineering and Communications Technologies 2022, 100, 30–39. [CrossRef]
  8. Hendrawan, M.A.; Saputra, P.Y.; Rahmad, C. Identification of optimum segment in single channel EEG biometric system. Indonesian Journal of Electrical Engineering and Computer Science 2021, 23, 1847–1854. [Google Scholar] [CrossRef]
  9. Sakalle, A.; Tomar, P.; Bhardwaj, H.; Acharya, D.; Bhardwaj, A. A LSTM based deep learning network for recognizing emotions using wireless brainwave driven system 2021. [CrossRef]
  10. Galvão, F.; Alarcão, S.M.; Fonseca, M.J. Predicting exact valence and arousal values from EEG 2021, 21. [CrossRef]
  11. Özerdem, M.S.; Polat, H. Emotion recognition based on EEG features in movie clips with channel selection. Brain Informatics 2017, 4, 241–252. [Google Scholar] [CrossRef]
  12. Maiorana, E. Deep learning for EEG-based biometric recognition. Neurocomputing 2020, 410, 374–386. [Google Scholar] [CrossRef]
  13. Prabowo, D.W.; Nugroho, H.A.; Setiawan, N.A.; Debayle, J. A systematic literature review of emotion recognition using EEG signals. Cognitive Systems Research 2023, 82, 101152. [Google Scholar] [CrossRef]
  14. Wang, Q.; Wang, M.; Yang, Y.; Zhang, X. Multi-modal emotion recognition using EEG and speech signals. Computers in Biology and Medicine 2022, 149, 105907. [Google Scholar] [CrossRef]
  15. Kouka, N.; Fourati, R.; Fdhila, R.; Siarry, P.; Alimi, A.M. EEG channel selection-based binary particle swarm optimization with recurrent convolutional autoencoder for emotion recognition. Biomedical Signal Processing and Control 2023, 84, 104783. [Google Scholar] [CrossRef]
  16. Huang, G.; Hu, Z.; Chen, W.; Zhang, S.; Liang, Z.; Li, L.; Zhang, L.; Zhang, Z. M3CV: A multi-subject, multi-session, and multi-task database for EEG-based biometrics challenge. NeuroImage 2022, 264, 119666. [Google Scholar] [CrossRef] [PubMed]
  17. Oikonomou, V.P. Human Recognition Using Deep Neural Networks and Spatial Patterns of SSVEP Signals 2023, 23. [CrossRef]
  18. BALCI, F. DM-EEGID: EEG-Based Biometric Authentication System Using Hybrid Attention-Based LSTM and MLP Algorithm. Traitement Du Signal 2023, 40, 1–14. [Google Scholar] [CrossRef]
  19. Bak, S.J.; Jeong, J. User Biometric Identification Methodology via EEG-Based Motor Imagery Signals. IEEE Access 2023, XX. [Google Scholar] [CrossRef]
  20. Ortega-Rodríguez, J.; Gómez-González, J.F.; Pereda, E. Brainprint based on functional connectivity and asymmetry indices of brain regions: A case study of biometric person identification with non-expensive electroencephalogram headsets 2023. [CrossRef]
  21. Alsumari, W.; Hussain, M.; Alshehri, L.; Aboalsamh, H.A. EEG-Based Person Identification and Authentication Using Deep Convolutional Neural Network. Axioms 2023, 12. [Google Scholar] [CrossRef]
  22. TajDini, M.; Sokolov, V.; Kuzminykh, I.; Ghita, B. Brainwave-based authentication using features fusion. Computers and Security 2023, 129, 103198. [Google Scholar] [CrossRef]
  23. Cui, G.; Li, X.; Touyama, H. Emotion recognition based on group phase locking value using convolutional neural network. Scientific Reports 2023, 13, 1–9. [Google Scholar] [CrossRef]
  24. Khubani, J.; Kulkarni, S. Inventive deep convolutional neural network classifier for emotion identification in accordance with EEG signals. Social Network Analysis and Mining 2023, 13. [Google Scholar] [CrossRef]
  25. Zali-Vargahan, B.; Charmin, A.; Kalbkhani, H.; Barghandan, S. Deep time-frequency features and semi-supervised dimension reduction for subject-independent emotion recognition from multi-channel EEG signals. Biomedical Signal Processing and Control 2023, 85, 104806. [Google Scholar] [CrossRef]
  26. Vahid, A.; Arbabi, E. Human identification with EEG signals in different emotional states. 2016 23rd Iranian Conference on Biomedical Engineering and 2016 1st International Iranian Conference on Biomedical Engineering, ICBME 2016, 2017; 242–246. [Google Scholar] [CrossRef]
  27. Arnau-González, P.; Arevalillo-Herráez, M.; Katsigiannis, S.; Ramzan, N. On the Influence of Affect in EEG-Based Subject Identification. IEEE Transactions on Affective Computing 2021, 12, 391–401. [CrossRef]
  28. Kaur, B.; Kumar, P.; Roy, P.P.; Singh, D. Impact of Ageing on EEG Based Biometric Systems. In Proceedings of the 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 11; 2017; pp. 459–464. [Google Scholar] [CrossRef]
  29. Diamond, M.C.; Scheibel, A.B.; Elson, L.M. El Cerebro Humano: Libro de Trabajo [The Human Brain Coloring Book] 2014.
  30. Romeo Urrea, H. El Dominio de los Hemisferios Cerebrales. Ciencia Unemi 2015, 3, 8–15. [Google Scholar] [CrossRef]
  31. Buzsáki, G.; Draguhn, A. Neuronal Oscillations in Cortical Networks. Science 2004, 304, 1926–1929. [Google Scholar] [CrossRef]
  32. Hestermann, E.; Schreve, K.; Vandenheever, D. Enhancing Deep Sleep Induction Through a Wireless In-Ear EEG Device Delivering Binaural Beats and ASMR: A Proof-of-Concept Study. Sensors 2024, 24, 7471. [Google Scholar] [CrossRef] [PubMed]
  33. Chen, S.; Tan, Z.; Xia, W.; Gomes, C.A.; Zhang, X.; Zhou, W.; Liang, S.; Axmacher, N.; Wang, L. Theta oscillations synchronize human medial prefrontal cortex and amygdala during fear learning. Science Advances 2021, 7. [Google Scholar] [CrossRef]
  34. Mikicin, M.; Kowalczyk, M. Audio-Visual and Autogenic Relaxation Alter Amplitude of Alpha EEG Band, Causing Improvements in Mental Work Performance in Athletes. Applied Psychophysiology and Biofeedback 2015, 40, 219–227. [Google Scholar] [CrossRef]
  35. Lundqvist, M.; Herman, P.; Warden, M.R.; Brincat, S.L.; Miller, E.K. Gamma and beta bursts during working memory readout suggest roles in its volitional control. Nature Communications 2018, 9, 1–12. [Google Scholar] [CrossRef] [PubMed]
  36. Hsu, H.H.; Yang, Y.R.; Chou, L.W.; Huang, Y.C.; Wang, R.Y. The Brain Waves During Reaching Tasks in People With Subacute Low Back Pain: A Cross-Sectional Study. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2025, 33, 183–190. [Google Scholar] [CrossRef]
  37. Subha, D.P.; Joseph, P.K.; U, R.A.; Lim, C. EEG Signal Analysis: A Survey. Journal of Medical Systems 2010, 34, 195–212. [Google Scholar] [CrossRef]
  38. Zhang, H.; Zhou, Q.Q.; Chen, H.; Hu, X.Q.; Li, W.G.; Bai, Y.; Han, J.X.; Wang, Y.; Liang, Z.H.; Chen, D.; et al. The applied principles of EEG analysis methods in neuroscience and clinical neurology. Military Medical Research 2023, 10, 67. [Google Scholar] [CrossRef]
  39. Colominas, M.A.; Schlotthauer, G.; Torres, M.E. Improved complete ensemble EMD: A suitable tool for biomedical signal processing. Biomedical Signal Processing and Control 2014, 14, 19–29. [Google Scholar] [CrossRef]
  40. Jaipriya, D.; Sriharipriya, K.C. Brain Computer Interface-Based Signal Processing Techniques for Feature Extraction and Classification of Motor Imagery Using EEG: A Literature Review. Biomedical Materials & Devices 2024, 2, 601–613. [Google Scholar] [CrossRef]
  41. Egorova, L.; Kazakovtsev, L.; Vaitekunene, E. Nonlinear Features and Hybrid Optimization Algorithm for Automated Electroencephalogram Signal Analysis; 2024; pp. 233–243. [CrossRef]
  42. Jain, A.K.; Ross, A.; Nandakumar, K. An introduction to biometrics. International Conference on Pattern Recognition 2008, pp. 1–1. [CrossRef]
  43. Ibrahim, H. EEG-Based Biometric Close-Set Identification Using CNN-ECOC-SVM 2021. pp. 723–732. [CrossRef]
  44. Lai, C.Q.; Ibrahim, H.; Suandi, S.A.; Abdullah, M.Z. Convolutional Neural Network for Closed-Set Identification from Resting State Electroencephalography 2022, 10, 3442. [CrossRef]
  45. Sun, Y.; Lo, F.P.-W.; Lo, B. EEG-based user identification system using 1D-convolutional long short-term memory neural networks 2019, 125, 259–267. [CrossRef]
  46. Kaliraman, B.; Nain, S.; Verma, R.; Dhankhar, Y.; P.B.H. Pre-processing of EEG signal using Independent Component Analysis 2022, pp. 1–5. [CrossRef]
  47. Yamashita, M.; Nakazawa, M.; Nishikawa, Y.; N.A. Examination and It's Evaluation of Preprocessing Method for Individual Identification in EEG 2020, pp. 239–246. [CrossRef]
  48. Bhawna, K.; Priyanka.; Duhan, M. Electroencephalogram Based Biometric System: A Review. Lecture Notes in Electrical Engineering 2021, 668, 57–77. [CrossRef]
  49. Acharya, D.; Lende, M.; Lathia, K.; Shirgurkar, S.; Kumar, N.; Madrecha, S.; A.B. Comparative Analysis of Feature Extraction Technique on EEG-Based Dataset 2020, pp. 405–416. [CrossRef]
  50. Kamaraju, S.P.; Das, K.; Pachori, R.B. EEG Based Biometric Authentication System Using Multivariate FBSE Entropy, 2023. [CrossRef]
  51. Ortega-Rodríguez, J.; Gómez-González, J.F.; Pereda, E. Selection of the Minimum Number of EEG Sensors to Guarantee Biometric Identification of Individuals. Sensors 2023, 23, 4239. [Google Scholar] [CrossRef] [PubMed]
  52. Benomar, M.; Cao, S.; Vishwanath, M.; Vo, K.Q.; Cao, H. Investigation of EEG-Based Biometric Identification Using State-of-the-Art Neural Architectures on a Real-Time Raspberry Pi-Based System 2022, 22, 9547. [CrossRef]
  53. Tian, W.; Li, M.; Hu, D., Multi-band Functional Connectivity Features Fusion Using Multi-stream GCN for EEG Biometric Identification; 2023; pp. 3196–3203. [CrossRef]
  54. Kralikova, I.; Babusiak, B.; Smondrk, M. EEG-Based Person Identification during Escalating Cognitive Load. Sensors 2022, 22, 7154. [Google Scholar] [CrossRef] [PubMed]
  55. Wibawa, A.D.; Mohammad, B.S.Y.; Fata, M.A.K.; Nuraini, F.A.; Prasetyo, A.; Pamungkas, Y. Comparison of EEG-Based Biometrics System Using Naive Bayes, Neural Network, and Support Vector Machine. In Proceedings of the 2022 International Conference on Electrical and Information Technology (IEIT). IEEE, 9 2022, pp. 408–413. [CrossRef]
  56. Hendrawan, M.A.; Rosiani, U.D.; Sumari, A.D.W. Single Channel Electroencephalogram (EEG) Based Biometric System. In Proceedings of the 2022 IEEE 8th Information Technology International Seminar (ITIS). IEEE, 10 2022, pp. 307–311. [CrossRef]
  57. Lai, C.Q.; Ibrahim, H.; Abdullah, M.Z.; Suandi, S.A. EEG-Based Biometric Close-Set Identification Using CNN-ECOC-SVM; 2022; pp. 723–732. [CrossRef]
  58. Jijomon, C.M.; Vinod, A.P. EEG-based Biometric Identification using Frequently Occurring Maximum Power Spectral Features. In Proceedings of the 2018 IEEE Applied Signal Processing Conference (ASPCON). IEEE, 12 2018, pp. 249–252. [CrossRef]
  59. Waili, T.; Johar, M.G.M.; Sidek, K.A.; Nor, N.S.H.M.; Yaacob, H.; Othman, M. EEG Based Biometric Identification Using Correlation and MLPNN Models. International Journal of Online and Biomedical Engineering (iJOE) 2019, 15, 77–90. [Google Scholar] [CrossRef]
  60. Monsy, J.C.; Vinod, A.P. EEG-based biometric identification using frequency-weighted power feature. The Institution of Engineering and Technology 2020, 9, 251–258. [CrossRef]
  61. Mishra, A.; Bhateja, V.; Gupta, A.; Mishra, A.; Satapathy, S.C., Feature Fusion and Classification of EEG/EOG Signals; 2019; pp. 793–799. [CrossRef]
  62. Barra, S.; Casanova, A.; Fraschini, M.; Nappi, M., EEG/ECG Signal Fusion Aimed at Biometric Recognition; 2015; pp. 35–42. [CrossRef]
  63. Zhang, X.; Yao, L.; Huang, C.; Gu, T.; Yang, Z.; Liu, Y. DeepKey. ACM Transactions on Intelligent Systems and Technology 2020, 11, 1–24. [Google Scholar] [CrossRef]
  64. Moreno-Rodriguez, J.C.; Ramirez-Cortes, J.M.; Atenco-Vazquez, J.C.; Arechiga-Martinez, R. EEG and voice bimodal biometric authentication scheme with fusion at signal level. In Proceedings of the 2021 IEEE Mexican Humanitarian Technology Conference (MHTC). IEEE, 4 2021, pp. 52–58. [CrossRef]
  65. Du, Y.; Xu, Y.; Wang, X.; Liu, L.; Ma, P. EEG temporal–spatial transformer for person identification. Scientific Reports 2022, 12, 14378. [Google Scholar] [CrossRef]
  66. Lin, L.; Zhao, Y.; Meng, J.; Zhao, Q. A Federated Attention-Based Multimodal Biometric Recognition Approach in IoT. Sensors 2023, 23, 6006. [Google Scholar] [CrossRef]
  67. Delvigne, V.; Wannous, H.; Vandeborre, J.P.; Ris, L.; Dutoit, T. Spatio-Temporal Analysis of Transformer based Architecture for Attention Estimation from EEG. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 8 2022, pp. 1076–1082. [CrossRef]
  68. Moctezuma, L.A.; Molinas, M. Towards a minimal EEG channel array for a biometric system using resting-state and a genetic algorithm for channel selection. Scientific Reports 2020, 10, 14917. [Google Scholar] [CrossRef]
  69. Carla, F.; Yanina, W.; Daniel Gustavo, P. ¿Cuántas Son Las Emociones Básicas? Anuario de Investigaciones 2017, 26, 253–257. [Google Scholar]
  70. LeDoux, J.E. Emotion Circuits in the Brain. Annual Review of Neuroscience 2000, 23, 155–184. [Google Scholar] [CrossRef]
  71. Bradley, M.M. Natural selective attention: Orienting and emotion. Psychophysiology 2009, 46, 1–11. [Google Scholar] [CrossRef]
  72. Gantiva, C.; K.C. Características de la respuesta emocional generada por las palabras: Un estudio experimental desde la emoción y la motivación [Characteristics of the emotional response generated by words: An experimental study from emotion and motivation] 2016, 10, 55–62.
  73. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals, 2012. [CrossRef]
  74. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging 2012, 3, 42–55. [CrossRef]
  75. Zheng, W.L.; Guo, H.T.; Lu, B.L. Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, apr 2015, pp. 154–157. [CrossRef]
  76. Erhan Ekmekcioglu, Y.C. Loughborough University Multimodal Emotion Dataset-2, 2020.
  77. Li, J.W.; Lin, D.; Che, Y.; Lv, J.J.; Chen, R.J.; Wang, L.J.; Zeng, X.X.; Ren, J.C.; Zhao, H.M.; Lu, X. An innovative EEG-based emotion recognition using a single channel-specific feature from the brain rhythm code method. Frontiers in Neuroscience 2023, 17. [Google Scholar] [CrossRef]
  78. Murugappan, M.; Ramachandran, N.; Sazali, Y. Classification of human emotion from EEG using discrete wavelet transform. Journal of Biomedical Science and Engineering 2010, 03, 390–396. [Google Scholar] [CrossRef]
  79. Lee, Y.Y.; Hsieh, S. Classifying Different Emotional States by Means of EEG-Based Functional Connectivity Patterns. PLoS ONE 2014, 9, e95415. [Google Scholar] [CrossRef]
  80. Iacoviello, D.; Petracca, A.; Spezialetti, M.; Placidi, G. A real-time classification algorithm for EEG-based BCI driven by self-induced emotions. Computer Methods and Programs in Biomedicine 2015, 122, 293–303. [Google Scholar] [CrossRef] [PubMed]
  81. Kumar, N.; Khaund, K.; Hazarika, S.M. Bispectral Analysis of EEG for Emotion Recognition. Procedia Computer Science 2016, 84, 31–35. [Google Scholar] [CrossRef]
  82. Zhang, Y.; Zhang, S.; Ji, X. EEG-based classification of emotions using empirical mode decomposition and autoregressive model. Multimedia Tools and Applications 2018, 77, 26697–26710. [Google Scholar] [CrossRef]
  83. Garcia-Martinez, B.; Fernandez-Caballero, A.; Alcaraz, R.; Martinez-Rodrigo, A. Application of Dispersion Entropy for the Detection of Emotions With Electroencephalographic Signals. IEEE Transactions on Cognitive and Developmental Systems 2022, 14, 1179–1187. [Google Scholar] [CrossRef]
  84. Daşdemir, Y.; Yıldırım, E.; Yıldırım, S. Emotion Analysis using Different Stimuli with EEG Signals in Emotional Space. Natural and Engineering Sciences 2017, 2, 1–10. [Google Scholar] [CrossRef]
  85. Singh, M.I.; Singh, M. Development of low-cost event marker for EEG-based emotion recognition. Transactions of the Institute of Measurement and Control 2017, 39, 642–652. [Google Scholar] [CrossRef]
  86. Nakisa, B.; Rastgoo, M.N.; Rakotonirainy, A.; Maire, F.; Chandran, V. Long Short Term Memory Hyperparameter Optimization for a Neural Network Based Emotion Recognition Framework. IEEE Access 2018, 6, 49325–49338. [Google Scholar] [CrossRef]
  87. Sovatzidi, G.; Iakovidis, D.K. Interpretable EEG-Based Emotion Recognition Using Fuzzy Cognitive Maps; 2023. [CrossRef]
  88. Ong, Z.Y.; Saidatul, A.; Vijean, V.; Ibrahim, Z. Non Linear Features Analysis between Imaginary and Non-imaginary Tasks for Human EEG-based Biometric Identification. IOP Conference Series: Materials Science and Engineering 2019, 557, 012033. [Google Scholar] [CrossRef]
  89. Brás, S.; Ferreira, J.H.T.; Soares, S.C.; Pinho, A.J. Biometric and Emotion Identification: An ECG Compression Based Method. Frontiers in Psychology 2018, 9. [Google Scholar] [CrossRef]
  90. Wang, Y.; Wu, Q.; Wang, C.; Ruan, Q. DE-CNN: An Improved Identity Recognition Algorithm Based on the Emotional Electroencephalography. Computational and Mathematical Methods in Medicine 2020, 2020, 1–12. [Google Scholar] [CrossRef]
  91. Chen, J.X.; Mao, Z.J.; Yao, W.X.; Huang, Y.F. EEG-based biometric identification with convolutional neural network. Multimedia Tools and Applications 2020, 79, 10655–10675. [Google Scholar] [CrossRef]
  92. Pandharipande, M.; Chakraborty, R.; Kopparapu, S.K. Modeling of Olfactory Brainwaves for Odour Independent Biometric Identification. In Proceedings of the 2023 31st European Signal Processing Conference (EUSIPCO). IEEE, Sep. 2023, pp. 1140–1144. [CrossRef]
  93. Duque-Mejía, C.; Castro, A.; Duque, E.; Serna-Guarín, L.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.; Becerra, M.A. Methodology for biometric identification based on EEG signals in multiple emotional states; [Metodología para la identificación biométrica a partir de señales EEG en múltiples estados emocionales]. RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao 2023, 2023, 281–288. [Google Scholar]
  94. Zhang, D.; Yao, L.; Zhang, X.; Wang, S.; Chen, W.; Boots, R. Cascade and Parallel Convolutional Recurrent Neural Networks on EEG-based Intention Recognition for Brain Computer Interface, 2021. arXiv:1708.06578.
  95. Kumar, A.; Kumar, A. DEEPHER: Human Emotion Recognition Using an EEG-Based DEEP Learning Network Model. Engineering Proceedings 2021, 10, 32. [Google Scholar] [CrossRef]
  96. McCall, I.C.; Wexler, A. Peering into the mind? The ethics of consumer neuromonitoring devices; 2020; pp. 1–22. [CrossRef]
  97. Kiran, A.; Ahmed, A.B.G.E.; Khan, M.; Babu, J.C.; Kumar, B.P.S. An efficient method for privacy protection in big data analytics using oppositional fruit fly algorithm. Indonesian Journal of Electrical Engineering and Computer Science 2025, 37, 670. [Google Scholar] [CrossRef]
  98. Green, D.J.; Barnes, T.A.; Klein, N.D. Emotion regulation in response to discrimination: Exploring the role of self-control and impression management emotion-regulation goals. Scientific Reports 2024, 14, 26632. [Google Scholar] [CrossRef] [PubMed]
  99. Sumari, M.A.H.U.D.R.A.D.W. Single Channel Electroencephalogram (EEG) Based Biometric System. Information Technology International Seminar (ITIS), 2022. https://ieeexplore.ieee.org/document/10010103.
  100. Jalaly Bidgoly, A.; Jalaly Bidgoly, H.; Arezoumand, Z. A survey on methods and challenges in EEG based authentication. Computers and Security 2020, 93, 101788. [Google Scholar] [CrossRef]
  101. Zhu, J.; Song, T.; H.C. Subject-Independent EEG Emotion Recognition Based on Genetically Optimized Projection Dictionary Pair Learning. 2023, 13, 977. [CrossRef]
  102. Yuvaraj, R.; Prince, A.A.; M.M. Emotion Recognition from Spatio-Temporal Representation of EEG Signals via 3D-CNN with Ensemble Learning Techniques. Emerging Trends of Biomedical Signal Processing in Intelligent Emotion Recognition 2023, 13, 685. [Google Scholar] [CrossRef]
  103. Si, X.; Huang, D.; Sun, Y.; Huang, H.; D.M. Transformer-based ensemble deep learning model for EEG-based emotion recognition. Brain Science Advances 2023, 9. [CrossRef]
  104. Qu, Z.; X.Z. EEG Emotion Recognition Based on Temporal and Spatial Features of Sensitive Signals. Journal of Electrical and Computer Engineering 2022, 2022, 5130184. [Google Scholar] [CrossRef]
  105. Ji, Y.; S.Y.D. Deep learning-based self-induced emotion recognition using EEG. Frontiers in Neuroscience 2022, 16. [Google Scholar] [CrossRef]
  106. Deng, X.; Lv, X.; Yang, P.; Liu, K.; K.S. Emotion Recognition Method Based on EEG in Few Channels. In Proceedings of the 2022 Data Driven Control and Learning Systems Conference (DDCLS), 2022, pp. 1291–1296. https://ieeexplore.ieee.org/document/9858390.
Figure 1. Framework for Understanding EEG-Based Biometric Methods and Emotional Influences
Figure 2. Number of Publications in Scopus
Figure 3. Representation of cerebral hemispheres
Table 1. Feature Extraction from EEG Signals
Ref. | Database | Feature Extraction | Classification Method | Accuracy | Year
[17] | Two SSVEP datasets: the speller dataset and the EPOC dataset | Auto-regressive (AR) modeling, power spectral density (PSD), energy of EEG channels, wavelet packet decomposition (WPD), and phase locking values (PLV) | Common spatial patterns combined with specialized deep-learning neural networks | Recognition rate of 99% | 2023
[18] | Not named in the paper (only citation 32 is given); inspection of the cited article suggests the BCI2000 dataset | Random Forest-based binary feature selection to filter out uninformative channels and determine the optimum number of channels for the highest accuracy | Hybrid attention-based LSTM-MLP | 99.96% (eyes closed) and 99.70% (eyes open) | 2023
[19] | "Big Data of 2-classes MI" and Dataset IVa | CSP, ERD/S, AR, and FFT to transform segmented data into informative features; the TDP method was excluded because it suits motor execution rather than motor imagery | SVM, GNB | SVM: CSP 98.97%, ERD/S 98.94%, AR 98.93%, FFT 97.92%; GNB: CSP 97.47%, ERD/S 94.58%, FFT 53.80%, AR 50.24% | 2023
[20] | Dataset I: self-collected with an inexpensive EEG device; Dataset II: the widely used PhysioNet BCI dataset [41], used to test the method with a large number of subjects | FieldTrip toolbox for Matlab: baseline correction relative to the mean voltage, then a 5–40 Hz finite impulse response (FIR) bandpass filter to remove or minimize undesired noise | Support Vector Machines (SVM), Neural Networks (NN), and Discriminant Analysis (DA) | Identification accuracy of up to 100% with a low-cost EEG device | 2023
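Several pipelines in Table 1 share the same opening stages: a bandpass filter (for example, the 5–40 Hz FIR filter applied with FieldTrip in [20]) followed by a spectral feature such as the power spectral density (PSD). The sketch below illustrates those two stages in Python with NumPy/SciPy on a synthetic one-channel signal; the sampling rate, filter order, and function names are illustrative choices, not taken from the cited studies.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, welch

FS = 160  # illustrative sampling rate in Hz

def bandpass_fir(sig, low=5.0, high=40.0, fs=FS, ntaps=101):
    """Zero-phase FIR bandpass filter, similar in spirit to the 5-40 Hz step in [20]."""
    taps = firwin(ntaps, [low, high], pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], sig)

def psd_features(sig, fs=FS):
    """Welch PSD estimate; averaged (log-)power per band is a typical per-channel feature."""
    freqs, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    return freqs, pxx

# Synthetic one-channel "EEG": a 10 Hz alpha-like rhythm plus broadband noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

filtered = bandpass_fir(eeg)
freqs, pxx = psd_features(filtered)
peak = freqs[np.argmax(pxx)]  # dominant frequency, expected near 10 Hz
```

On real recordings the same two functions would be applied per channel, and the resulting PSD values (or band averages) concatenated into the feature vector passed to the classifier.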
Table 2. Feature Extraction from EEG Signals
Ref. | Database | Feature Extraction | Classification Method | Accuracy | Year
[21] | Only two EEG channels, with a signal measured over a short 5-second temporal window | - | CNN | 99% identification accuracy and a 0.187% equal error rate for authentication | 2023
[22] | Collected from 50 volunteers | 1) spectral information, 2) coherence, 3) mutual correlation coefficient, and 4) mutual information | SVM | Equal error rate (EER) of 0.52%, with a classification rate of 99.06% | 2023
[23] | DEAP | Phase locking value (PLV) | CNN | 85% | 2023
[24] | SEED and DEAP | Inventive brain optimization algorithm with frequency features to enhance detection accuracy | Optimized deep convolutional neural network (DCNN), compared against K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Deep Belief Network (DBN) | DCNN: 97.12% at 90% training and 96.83% under K-fold analysis | 2023
[25] | SEED (code available at https://github.com/heibaipei/DE-CNN) | Time-frequency content via the modified Stockwell transform; deep features from each channel's time-frequency content via a deep convolutional neural network; fusion of the reduced features of all channels into the final feature vector; semi-supervised dimension reduction | Inception-V3 CNN with a support vector machine (SVM) classifier | ... | 2023
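The phase locking value used in [17] and [23] measures how consistently the phase difference between two channels is maintained over time. A minimal sketch, assuming Hilbert-transform phase estimation (a common but not the only choice); the signals here are synthetic stand-ins for EEG channels.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length signals.
    PLV = |mean(exp(i*(phi_x - phi_y)))|: 1 means perfectly locked phases, 0 none."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

fs = 128  # DEAP's preprocessed recordings are downsampled to 128 Hz
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)   # same rhythm with a constant phase lag
c = rng.standard_normal(t.size)        # unrelated noise channel

locked = plv(a, b)    # close to 1: constant phase difference
unlocked = plv(a, c)  # much lower: no stable phase relation
```

In studies like [23], this scalar is computed for every channel pair (usually after band-filtering), and the resulting connectivity matrix is fed to the CNN.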
Table 3. Feature Extraction from EEG Signals
Ref. | Database | Feature Extraction | Classification Method | Accuracy | Year
[26] | DEAP | Sequential floating forward selection (SFFS) to select the best features; 10-fold cross-validation in all experiments and scenarios | Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel | CCR in the range 88–99%, whereas the EER in the cited prior research ranges 15–35% using SVM | 2017
[27] | DEAP, MAHNOB-HCI, and SEED | Time-domain and frequency-domain features | Support Vector Machines (SVM), Random Forest (RF), and k-Nearest Neighbors (k-NN) | ..... | 2021
[28] | Own data collected from 60 users with an Emotiv Epoc+ | Savitzky-Golay filtering to attenuate short-term signal variations | Hidden Markov Model (HMM) based temporal classifier and Support Vector Machine (SVM) | User identification of 97.50% (HMM) and 93.83% (SVM) | 2017
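Row [28] smooths the raw signals with a Savitzky-Golay filter, which fits a low-order polynomial in a sliding window and so attenuates short-term variations while preserving slower waveform shape. The sketch below shows the effect on a synthetic signal; the window length and polynomial order are illustrative guesses, not the parameters of [28].

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 512)
slow = np.sin(2 * np.pi * 3 * t)                  # slow component to preserve
noisy = slow + 0.4 * rng.standard_normal(t.size)  # short-term variation to attenuate

# window_length and polyorder are illustrative, not the values used in [28]
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)

residual_before = np.std(noisy - slow)
residual_after = np.std(smoothed - slow)  # smoothing should shrink the residual
```

The polynomial fit is what distinguishes Savitzky-Golay from a plain moving average: peaks and troughs of the underlying waveform are flattened far less for the same amount of noise suppression.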
Table 4. Feature Extraction and Classification Techniques for Biometric Identification
Ref. | Preprocessing | Database | Feature extraction | Biometric classification | Accuracy
[50] | Multivariate variational mode decomposition (MVMD) | Own database (35 subjects) | Fourier-Bessel series expansion-based (FBSE) entropies | K-NN | 93.4±7.0%
[51] | FieldTrip, 4–40 Hz bandpass, 13–30 Hz beta frequency band | Own database (13 subjects) and PhysioNet BCI (109 subjects) | PCA, Wilcoxon test, fast Fourier transform, power spectrum (PS), asymmetry index | RBF-SVM, K-fold cross-validation | 99.9±1.39%
[52] | PREP pipeline, notch filter, StandardScaler, 1 Hz high-pass filter, 50 Hz low-pass filter | BED (Biometric EEG Dataset), 21 subjects | PCA, Wilcoxon test, optimal spatial filtering | Deep learning (DL) | 86.74%
[53] | - | - | Functional connectivity (FC) | Multi-stream GCN (MSGCN) | 98.05%
[54] | Notch filter, bandpass filter, common average reference (CAR) | Own database (21 subjects) | 1D-CNN | 5-fold cross-validation; LDA, SVM, K-NN, DL | 99%
[55] | Finite impulse response (FIR) filter, automatic artifact removal for EOG (AAR-EOG), artifact subspace reconstruction (ASR), and independent component analysis (ICA) | Own database (43 subjects) | Power spectral density (PSD) | Naive Bayes, neural network, SVM | 97.7%
[56] | ICA, Butterworth filter | Own database (8 subjects) | PSD from delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–14 Hz), beta (14–30 Hz), and gamma (30–50 Hz) bands; LDA | K-NN, SVM | 80%
[57] | - | PhysioBank database (109 subjects) | CNN | CNN-ECOC-SVM | 98.49%
[58] | Matlab edfread, 7.5-second window | PhysioNet database (109 subjects) | Power spectrum, PSD, mean correlation coefficient (MCC) | Method proposed by the authors | Error rate of 0.016
[59] | 2nd-order Butterworth filter | Own database (6 subjects) | Daubechies (db8) wavelet, PSD | Multilayer perceptron neural network (MLPNN) | 75.8%
[60] | Matlab edfread | PhysioNet database (16 subjects) | Frequency-weighted power (FWP) | Method proposed by the authors | EER of 0.0039
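Rows such as [56] reduce each channel to its average PSD power in the canonical delta-gamma bands and then identify subjects with a distance-based classifier such as K-NN. A compact sketch of that feature-plus-classifier idea: the band limits follow the [56] row, but the hand-rolled 1-nearest-neighbour rule and the synthetic "subjects" are illustrative stand-ins, not the cited pipeline.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 30), "gamma": (30, 50)}  # band limits as in the [56] row

def band_powers(sig, fs):
    """Average PSD in each canonical EEG band -> 5-dimensional feature vector."""
    freqs, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    return np.array([pxx[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def nn1_classify(train_X, train_y, x):
    """Minimal 1-nearest-neighbour rule (a stand-in for the K-NN classifiers above)."""
    return train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]

fs = 128
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)

def fake_subject(freq):
    """Synthetic 'subject' dominated by one rhythm; real data would be recorded EEG."""
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

train_X = np.stack([band_powers(fake_subject(f), fs) for f in (6, 10, 20)])
train_y = np.array([0, 1, 2])                   # enrolled subject identities
probe = band_powers(fake_subject(10), fs)       # a new recording from subject 1
pred = nn1_classify(train_X, train_y, probe)    # expected to recover identity 1
```

With multi-channel data the per-channel band-power vectors are simply concatenated, which is why channel-selection steps (as in [18] and [68]) matter for keeping the feature dimension manageable.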
Table 5. Feature Extraction and Classification Techniques for Emotion Identification
Ref. | Year | Preprocessing | Extraction and selection | Emotion classification
[78] | 2010 | Laplacian surface filter | Wavelet transform, Fuzzy C-Means (FCM) and Fuzzy K-Means (FKM) | Linear Discriminant Analysis (LDA) and K-Nearest Neighbor (K-NN)
[79] | 2014 | FFT, EEGLAB | Correlation, coherence, and phase synchronization | Quadratic discriminant analysis
[80] | 2015 | Wavelet filter | PCA | SVM
[81] | 2015 | Blind source separation, 4.0–45.0 Hz bandpass filter | Higher Order Spectral Analysis (HOSA) | LS-SVM, Artificial Neural Networks (ANN)
[81] | 2016 | Butterworth filter | Bispectral analysis with HOSA | SVM
[82] | 2016 | Independent Component Analysis-based algorithm | Sample entropy, quadratic entropy, dispersion entropy | SVM
[83] | 2016 | Independent Component Analysis-based algorithm | Sample entropy, quadratic entropy, dispersion entropy | SVM
[84] | 2017 | MARA, AAR | Phase locking value (PLV), with ANOVA for significance testing | SVM
[85] | 2017 | Laplacian surface filter | Wavelet transform | Polynomial SVM
[86] | 2018 | Butterworth and notch filters | ACA, SA, GA, SPO algorithms | SVM
[77] | 2023 | DWT, EMD | Reassigned smoothed pseudo-Wigner–Ville distribution (RSPWVD) | K-NN, SVM, LDA, and LR
[87] | 2023 | Finite impulse response (FIR) filter, artifact subspace reconstruction (ASR) | Power spectral density (PSD) | Naïve Bayes (NB), K-NN, SVM, Fuzzy Cognitive Map (FCM)
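Sample entropy, listed in the [82] and [83] rows of the table above, quantifies signal regularity: it counts how often length-m patterns repeat (within a tolerance r) versus how often the corresponding length-(m+1) patterns do, with lower values indicating a more regular signal. A straightforward, unoptimized (O(n²) memory) NumPy sketch; the defaults m = 2 and r = 0.2·std are the conventional choices, not necessarily those of the cited studies.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -log(A/B), where B counts pairs of matching length-m
    templates (Chebyshev distance <= r) and A counts matching length-(m+1)
    templates. Simplified sketch: each count uses its own template set."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return np.sum(d <= r) - len(templates)  # exclude diagonal self-matches
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(4)
t = np.linspace(0, 4, 400)
regular = np.sin(2 * np.pi * 2 * t)      # periodic signal -> low entropy
irregular = rng.standard_normal(t.size)  # white noise -> high entropy

se_regular = sample_entropy(regular)
se_irregular = sample_entropy(irregular)
```

In the emotion-recognition pipelines above, such entropy values are computed per channel (often per frequency band) and stacked into the feature vector handed to the SVM.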
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.