
Enhancing Brain-Computer Interface Performance and Security through Advanced Artificial Intelligence Techniques
1 Guangdong University of Technology, Guangdong, China
* Author to whom correspondence should be addressed.
Abstract
Brain-computer interfaces (BCIs) have developed rapidly, but this growth has exposed several persistent problems: the scarcity of BCI data, inaccurate decoding and classification of brain signals, and the security of the data involved. Advances in artificial intelligence offer practical solutions to these problems. This paper reviews the integration of AI techniques, specifically transfer learning, generative adversarial networks (GANs), Transformer models, and federated learning, to address the challenges of data scarcity, classification accuracy, and data security in BCIs. Hybrid models perform particularly well: a CNN-Transformer architecture jointly extracts spatial and temporal features, compensating for the weaknesses of either model alone and improving overall performance, while a GAN-transfer learning (GAN-TL) hybrid effectively reduces the impact of individual differences across subjects. The paper highlights how such hybrid AI models significantly enhance BCI performance, identifies this hybrid direction as the main avenue for future research, and outlines current limitations and open problems on the way to robust, efficient, and secure BCI applications.
Keywords
brain-computer interfaces, convolutional neural networks, generative adversarial networks, federated learning, Transformer
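To make the CNN-Transformer hybrid mentioned in the abstract concrete, the following is a minimal sketch in PyTorch: a convolutional front end performs temporal and spatial filtering of raw EEG to produce local feature tokens, and a Transformer encoder then models long-range temporal dependencies among those tokens before classification. All layer sizes, electrode counts, and class counts here are illustrative assumptions, not the specific architecture evaluated in the reviewed works.

```python
# Illustrative sketch only: a minimal CNN-Transformer hybrid for EEG
# classification. Hyperparameters (22 electrodes, 4 classes, d_model=64)
# are assumptions for demonstration, not values from the paper.
import torch
import torch.nn as nn

class CNNTransformerEEG(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Convolutional front end: temporal then spatial filtering,
        # yielding a sequence of local spatiotemporal feature tokens.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12)),  # temporal conv
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)),     # spatial conv over electrodes
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),  # downsample in time
        )
        # Transformer encoder: self-attention over the CNN tokens
        # captures long-range temporal context across the trial.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, channels, time) raw EEG
        feats = self.cnn(x.unsqueeze(1))           # (batch, d_model, 1, T')
        tokens = feats.squeeze(2).transpose(1, 2)  # (batch, T', d_model)
        tokens = self.transformer(tokens)          # global temporal context
        return self.classifier(tokens.mean(dim=1)) # pool over time, then classify

if __name__ == "__main__":
    # Hypothetical 4-class motor-imagery trial: 22 electrodes, 1000 samples.
    model = CNNTransformerEEG()
    dummy = torch.randn(8, 22, 1000)
    print(model(dummy).shape)  # torch.Size([8, 4])
```

The division of labor in this sketch reflects the complementarity the abstract refers to: convolution handles electrode-level spatial filtering and short-range temporal patterns cheaply, while self-attention captures dependencies spanning the whole trial that a fixed convolutional receptive field would miss.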
Cite this article
Liu, W. (2025). Enhancing Brain-Computer Interface Performance and Security through Advanced Artificial Intelligence Techniques. Applied and Computational Engineering, 154, 1-6.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of CONF-SEML 2025 Symposium: Machine Learning Theory and Applications
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).