EEG-Based Affective Computing: A Review of Signal Processing Techniques

Research Article
Open access


Linlin Su 1*
  • 1 Huamei-Bond International School, Guangzhou, China, 510520    
  • *corresponding author sulin20070910@163.com
Published on 26 August 2025 | https://doi.org/10.54254/2755-2721/2025.LD26298
ACE Vol.179
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-80590-184-6
ISBN (Online): 978-1-80590-129-7

Abstract

As intelligent human-computer interaction (HCI) evolves, the ability of systems to accurately perceive and respond to human emotions has become increasingly crucial. Emotional perception allows machines to adapt and react empathetically, making interactions more natural and engaging. This paper reviews current EEG-based emotion recognition techniques, focusing on key steps such as preprocessing, feature extraction, and machine learning models. Specifically, we explore models such as Support Vector Machines (SVM), Long Short-Term Memory (LSTM) networks, and Deep Belief Networks (DBN), all of which have demonstrated promising results in classifying emotional states from EEG signals. In addition, we compare some of the most recent approaches in the field, including MCD_DA, a method developed at Hebei University of Technology. This technique addresses the challenge of cross-subject adaptation: recognizing emotions in new individuals not seen during training, which is crucial for real-world applications. Many emotion recognition systems struggle to generalize to new subjects because of individual differences in brainwave patterns; MCD_DA attempts to solve this problem, making the technology more robust and scalable.

Keywords:

Emotion Recognition, EEG Signals, Machine Learning Models

Su, L. (2025). EEG-Based Affective Computing: A Review of Signal Processing Techniques. Applied and Computational Engineering, 179, 50-55.

1. Introduction

As artificial intelligence rapidly progresses, human-computer interaction (HCI) is shifting toward more intelligent and intuitive forms. Traditional HCI systems rely on explicit cues such as voice commands, hand gestures, or facial expressions. However, emotions, which are often conveyed subtly, are indispensable for making these exchanges more personalized and effective. Affective computing aims to equip machines with the capacity to discern and react appropriately to human emotions.

Among the many physiological signals used in this field, electroencephalogram (EEG) signals have become particularly important due to their fine-grained temporal resolution and direct link to brain activity. This paper explores using EEG for emotion recognition, covering essential steps such as signal preprocessing, feature extraction, and dimensionality reduction. It also discusses how these features are used in machine learning models, focusing on challenges like subject variability and real-time system performance.

In conclusion, this review elucidates contemporary EEG-driven methodologies, underscoring the salient challenges that persist. Furthermore, it delineates prospective research trajectories in domains such as multimodal fusion, individualized affect modeling, and the development of emotionally responsive interactive platforms.

2. EEG signal fundamentals

2.1. Definition and acquisition of EEG signals

EEG signals reflect electrical activity from synchronized neuronal firing in the brain’s cortex [1]. They are recorded using electrodes placed on the scalp, following the standard 10–20 system. Acquisition systems from Emotiv, Neuroscan, and g.tec are commonly used to collect this data in research and clinical settings. These systems yield multi-channel recordings at sampling rates from 128 Hz to over 1000 Hz. Good electrode contact, low impedance, and minimal artifact contamination are essential for high-quality data. In affective research, investigators typically present specific stimuli, such as videos or images, to elicit emotional reactions during EEG recording.

2.2. Characteristics of EEG signals

EEG signals have both time-based and frequency-based characteristics that are useful for emotion analysis:

• Time-domain features include fundamental statistical values like mean, variance, skewness, and kurtosis that describe the overall shape of the signal over time.

• Frequency-domain features are obtained through techniques like Power Spectral Density (PSD), which decomposes the signal into the standard brainwave bands:

  • Delta (1–4 Hz)

  • Theta (4–8 Hz)

  • Alpha (8–13 Hz)

  • Beta (14–30 Hz)

  • Gamma (>30 Hz)

Furthermore, differential entropy (DE), a measure quantifying the randomness of the signal’s distribution [2], has proven effective in differentiating affective states, as shown in research from groups including Tsinghua University and the Chinese Academy of Sciences.
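The band definitions and the DE measure above can be sketched as follows. The sampling rate, Welch parameters, and the synthetic 10 Hz test signal are illustrative assumptions; DE is computed with the closed-form Gaussian expression commonly used in DE-based EEG work.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (30, 45)}  # gamma capped at 45 Hz here

def band_powers(x, fs=FS):
    """Average Welch PSD power within each canonical band."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def differential_entropy(x):
    """DE of a segment under a Gaussian assumption: 0.5*ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(0)
t = np.arange(4 * FS) / FS                                    # 4 s of data
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_powers(x)
print(max(powers, key=powers.get))  # the 10 Hz tone dominates the alpha band
```

In multi-channel settings, such per-channel band powers and DE values are typically concatenated into a single feature vector per analysis window.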

2.3. Relationship between EEG and emotions

Emotional responses are linked to specific EEG patterns across different brain areas and frequency bands. For instance:

• Positive (approach-related) emotions are often associated with relatively greater left-frontal activity, typically observed as lower alpha power over the left frontal region.

• Negative (withdrawal-related) emotions show the opposite pattern, with relatively greater right-frontal activity.

• High arousal states are associated with increased theta and beta power.

Studies from universities such as Zhejiang, Tsinghua, and the Chinese Academy of Sciences have consistently shown that EEG signals contain rich emotional information, making them suitable for feature extraction in emotion detection systems.

3. Signal processing techniques in affective computing

3.1. Preprocessing techniques

EEG signals are inherently noisy and prone to various artifacts from both internal (like eye movements) and external sources (like electrical interference). Preprocessing is, therefore, a critical step. It typically involves:

Filtering: A band-pass filter (usually 0.5–45 Hz) removes irrelevant frequencies while keeping brainwave ranges important for emotion and cognition.

Artifact Removal: Pervasive artifacts such as electrooculographic (EOG), electromyographic (EMG), and motion-related interference are mitigated with established methods like Independent Component Analysis (ICA) and Principal Component Analysis (PCA). Some studies also apply thresholding or template-matching algorithms to automate artifact rejection. Prior research on datasets such as DEAP has demonstrated that the choice of preprocessing pipeline can substantially influence classification outcomes.

In practical pipelines, filtering is followed by ICA (often using MATLAB’s EEGLAB) to clean the data. Then, the EEG is divided into overlapping windows, and techniques like Short-Time Fourier Transform (STFT) extract energy features from each frequency band.
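A minimal sketch of the filter-then-window part of such a pipeline, assuming a 256 Hz sampling rate, 2 s windows, and 50% overlap; the ICA cleaning step (e.g. via EEGLAB or MNE) is deliberately omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate (Hz)

def bandpass(x, lo=0.5, hi=45.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass over the usual 0.5-45 Hz EEG range."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def sliding_windows(x, win_s=2.0, overlap=0.5, fs=FS):
    """Segment a 1-D signal into overlapping analysis windows."""
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    n = 1 + (len(x) - win) // step
    return np.stack([x[i * step:i * step + win] for i in range(n)])

x = np.random.default_rng(1).standard_normal(10 * FS)  # 10 s of fake EEG
windows = sliding_windows(bandpass(x))
print(windows.shape)  # (9, 512): nine 2-s windows at 50% overlap
```

Each window would then be passed to the STFT or other feature extractors described above.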

3.2. Feature extraction methods

Feature extraction is crucial to determining how well the model can recognize emotions. The extracted features are usually grouped into [3]:

Time-Domain Features: These are basic statistical measures, such as the mean, standard deviation, skewness, and kurtosis; they are computationally efficient but often lack the discriminative power needed for reliable emotion classification.

Frequency-Domain Features: These parameters are derived through spectral analysis techniques, such as the Fourier Transform, with Power Spectral Density (PSD) serving as a common metric for quantifying the distribution of energy across various frequency bands.

Time-Frequency Features: Because emotional responses unfold over time, features that capture both timing and frequency content are especially useful. For instance, wavelet packet decomposition yields features such as wavelet entropy, energy entropy, and the Hurst exponent. In one study, these features were reduced with principal component analysis (PCA) and then fed into a support vector machine (SVM) classifier, yielding an accuracy of approximately 85%, which outperformed any individual feature used in isolation.
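To make the wavelet-entropy idea concrete, here is an illustrative stand-in using a hand-rolled Haar decomposition, so that only NumPy is needed; the four-level depth and the synthetic signals are assumptions, not the cited study's setup.

```python
import numpy as np

def haar_dwt(x):
    """One Haar decomposition level: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def wavelet_energy_entropy(x, levels=4):
    """Shannon entropy of the relative energy per decomposition level."""
    energies = []
    approx = x
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]                       # drop empty levels before the log
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(2)
tone = np.sin(2 * np.pi * np.arange(1024) / 16)   # energy in few levels
noise = rng.standard_normal(1024)                 # energy spread over levels
h_tone = wavelet_energy_entropy(tone)
h_noise = wavelet_energy_entropy(noise)
print(h_tone < h_noise)  # a pure tone concentrates energy in fewer levels
```

Real pipelines would use a wavelet library with richer bases and combine such entropies with the other features listed above.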

3.3. Feature selection and dimensionality reduction

Because EEG data contains many features, some of which may be redundant, it is necessary to reduce the data’s complexity:

Principal Component Analysis (PCA): This method reduces correlated features into fewer uncorrelated components [4], preserving most of the salient information. It has been effectively employed to consolidate complex features prior to classifier application.

Linear Discriminant Analysis (LDA): Unlike PCA, LDA uses class labels to maximize class separability. It is particularly effective when the emotional categories are well defined, making it a popular choice for tasks such as valence classification.
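A small contrast of the two reducers on synthetic "EEG feature" vectors (the class count, dimensionality, and mean shift are assumptions): PCA ignores labels and keeps variance, while LDA uses the labels and, for two classes, yields a single discriminant component.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 32
# Two emotion classes whose feature means differ slightly.
X = np.vstack([rng.normal(0.0, 1, (n_per_class, n_features)),
               rng.normal(0.8, 1, (n_per_class, n_features))])
y = np.repeat([0, 1], n_per_class)

X_pca = PCA(n_components=5).fit_transform(X)              # unsupervised
X_lda = LinearDiscriminantAnalysis().fit_transform(X, y)  # supervised
print(X_pca.shape, X_lda.shape)  # (200, 5) (200, 1)
```

The single LDA dimension is a direct consequence of its at-most-(classes − 1) component limit, which is why it pairs naturally with well-defined emotional categories.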

4. Machine learning methods in affective computing

4.1. Traditional machine learning methods

Support Vector Machine (SVM): This model is widely used for EEG-based emotion classification due to its ability to handle small, high-dimensional datasets [5]. For example, research indicates that integrating PCA-reduced features, such as wavelet entropy and the Hurst exponent, can produce accuracy levels of approximately 85%.
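A sketch of such a PCA-plus-SVM pipeline on synthetic feature vectors; the RBF kernel, component count, and data are assumptions, so the score only demonstrates that the pipeline runs end to end, not a benchmark result.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Synthetic stand-ins for per-window EEG feature vectors, two classes.
X = np.vstack([rng.normal(0.0, 1, (120, 40)), rng.normal(0.7, 1, (120, 40))])
y = np.repeat([0, 1], 120)

# Standardize -> reduce -> classify, mirroring the pipeline in the text.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"CV accuracy: {acc:.2f}")  # well above chance on this separable toy data
```

Cross-validation is used here because EEG datasets are typically small, which is also the regime where SVMs are most competitive.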

Random Forest (RF): Although not the main focus in some studies, Random Forest is often used for its robustness and simplicity. It builds multiple decision trees and is especially useful for noisy or complex data, including multimodal inputs.

4.2. Deep learning methods

Deep Belief Network (DBN): A DBN consists of layers of Restricted Boltzmann Machines (RBMs) and can learn deep features from EEG data. One study used it with PSD features to classify emotions and reached an accuracy of 89.12%, outperforming models like SVM [6].

Convolutional Neural Network (CNN): CNNs excel at learning spatial patterns. By arranging EEG-derived features, such as PSD or DE, as two-dimensional maps, CNNs can identify salient spatial structure across channels and frequency bands, improving the accuracy of emotion classification models.

Recurrent Neural Network (RNN, especially LSTM): Since emotions unfold over time, LSTM networks capture these time-based patterns. Combining CNNs (for spatial features) with LSTMs (for temporal features) has shown strong performance in datasets like DEAP.
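To make the temporal gating concrete, here is a single LSTM step in plain NumPy; the feature and hidden sizes and the random weights are illustrative assumptions, not a trained emotion model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
    c_new = f * c + i * np.tanh(g)                 # update the cell memory
    h_new = o * np.tanh(c_new)                     # expose the gated state
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 8, 16                       # per-step feature size and hidden size
W = 0.1 * rng.normal(size=(4 * H, D))
U = 0.1 * rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for t in range(10):                # run over a short feature sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (16,)
```

In a CNN-LSTM hybrid, each per-window input x would instead be the CNN's spatial feature vector for that time step.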

5. Case studies of EEG-based affective computing

5.1. Case study: DEAP dataset and hybrid CNN-LSTM architecture

The DEAP dataset is a widely used benchmark that includes EEG and other physiological data from 32 participants watching 40 music videos. After each video, participants rate their emotions based on valence, arousal, dominance, and liking. This dataset supports research on emotion detection across multiple signal types. It has been used to test hybrid deep learning models—such as CNN-LSTM—demonstrating strong performance in modeling emotional responses.

5.2. Case study: cross-subject emotion recognition using MCD_DA

One major challenge in EEG-based emotion recognition is cross-subject generalization. EEG signals are highly personalized, and models trained on one individual often fail to perform well on another [7]. To address this, domain adaptation strategies are introduced.

The Maximum Classifier Discrepancy Domain Adaptation (MCD_DA) method was proposed to address this [8]. This adversarial approach aligns the feature distributions of the source and target domains by minimizing the disagreement between the predictions of two distinct classifiers. Experiments on the SEED dataset confirmed that MCD_DA significantly improves recognition accuracy under domain shift, making it promising for real-world applications where calibration-free emotion recognition is desired.
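The quantity at the heart of MCD-style adaptation can be sketched as the mean L1 distance between two classifiers' softmax outputs on target samples; the full adversarial min/max training loop over the feature extractor and classifiers is omitted, and the random logits are stand-ins.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # stable softmax per row
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(logits1, logits2):
    """Mean absolute difference between two classifiers' probabilities."""
    return np.mean(np.abs(softmax(logits1) - softmax(logits2)))

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 3))   # 64 target samples, 3 emotion classes
d_close = classifier_discrepancy(base, base + 0.01 * rng.normal(size=base.shape))
d_far = classifier_discrepancy(base, rng.normal(size=base.shape))
print(d_close < d_far)  # agreeing classifiers yield a smaller discrepancy
```

During training, the classifiers are pushed to maximize this discrepancy on target data while the feature extractor is trained to minimize it, which drives the two domains' features toward alignment.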

5.3. Summary

This article has presented a comprehensive review of EEG-based affective computing, with emphasis on the complete processing chain: signal preprocessing, feature extraction, dimensionality reduction, and classification. Signal processing techniques play a pivotal role in refining the data and optimizing emotion recognition systems. Among the mainstream machine learning models discussed, including Support Vector Machines (SVM), Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, and Deep Belief Networks (DBN), the choice of model significantly influences the accuracy and effectiveness of emotional state detection.

In particular, the study highlights the importance of preprocessing and feature extraction in improving model generalization, especially when a system must adapt to new subjects (cross-subject adaptation) or operate in real time. These steps involve cleaning the EEG signals to remove noise and artifacts, then extracting key features such as wavelet entropy, Power Spectral Density (PSD), and differential entropy, which are crucial for accurate emotion classification. Proper dimensionality reduction with techniques like PCA or LDA can further improve efficiency by removing redundant features, allowing faster processing without sacrificing accuracy.

Despite the strengths of the reviewed approaches, this study has certain limitations. Some references, particularly in older sections, may no longer reflect the most recent advances in the field. The comparison of methods also lacks a thorough quantitative analysis, which would give a clearer picture of their relative performance. Furthermore, the study does not investigate multimodal fusion in depth, that is, the combination of EEG with complementary biosignals such as heart rate variability or facial electromyography, a strategy that could improve recognition accuracy and provide a more complete characterization of affective states. Future research could address these gaps by focusing on dynamic, personalized modeling of emotional responses, tailoring systems to individual users and their unique emotional profiles.

In addition, exploring more advanced neural architectures, such as Graph Neural Networks (GNNs), could bring significant improvements, as GNNs are particularly suited for handling complex relational data like EEG signals involving spatial and temporal dependencies. Enhancing domain adaptation—the ability of emotion recognition models to generalize across different environments or subject populations—would also be crucial for improving the practical applicability of EEG-based systems in diverse, real-world scenarios.

6. Conclusion

In summary, signal processing is still essential for affective computing based on EEG. The effective use of these methods is critical for creating interactive systems that are more emotionally aware, which will improve user experience and expand the possibilities of human-computer interaction. As research advances, incorporating more complex models, multimodal data, and customized strategies will be crucial in making emotion recognition systems more precise, flexible, and useful in practical applications.


References

[1]. Wyler, A. R., Ojemann, G. A., & Ward Jr, A. A. (1982). Neurons in human epileptic cortex: correlation between unit and EEG activity. Annals of Neurology: Official Journal of the American Neurological Association and the Child Neurology Society, 11(3), 301-308.

[2]. Shi, L. C., Jiao, Y. Y., & Lu, B. L. (2013, July). Differential entropy feature for EEG-based vigilance estimation. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 6627-6630). IEEE.

[3]. Jenke, R., Peer, A., & Buss, M. (2014). Feature extraction and selection for emotion recognition from EEG. IEEE Transactions on Affective Computing, 5(3), 327-339.

[4]. Boutsidis, C., Mahoney, M. W., & Drineas, P. (2008, August). Unsupervised feature selection for principal components analysis. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 61-69).

[5]. Wang, X. W., Nie, D., & Lu, B. L. (2011, November). EEG-based emotion recognition using frequency domain features and support vector machines. In International Conference on Neural Information Processing (pp. 734-743). Berlin, Heidelberg: Springer Berlin Heidelberg.

[6]. Sohn, I. (2021). Deep belief network-based intrusion detection techniques: A survey. Expert Systems with Applications, 167, 114170.

[7]. Dose, H., Møller, J. S., Iversen, H. K., & Puthusserypady, S. (2018). An end-to-end deep learning approach to MI-EEG signal classification for BCIs. Expert Systems with Applications, 114, 532-542.

[8]. Saito, K., Watanabe, K., Ushiku, Y., & Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3723-3732).



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of CONF-MLA 2025 Symposium: Intelligent Systems and Automation: AI Models, IoT, and Robotic Algorithms

ISBN: 978-1-80590-184-6 (Print) / 978-1-80590-129-7 (Online)
Editor: Hisham AbouGrad
Conference date: 17 November 2025
Series: Applied and Computational Engineering
Volume number: Vol. 179
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2025 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
