
Explore Potential in Bridging of Neuroscience and Deep Learning
- 1 Integrated Circuit Design and Integrated Systems, Nanjing University, Suzhou, China
- 2 School of Electronic Engineering, Xidian University, Xi'an, China
- 3 School of Electronic Engineering, Xidian University, Xi'an, China
- 4 School of Electronic and Information Engineering, Tongji University, Shanghai, China
- 5 School of Electronic and Information Engineering, Tongji University, Shanghai, China
* Author to whom correspondence should be addressed.
Abstract
Neuroscience has long-standing connections with machine learning, but its relationship with deep learning is less clear. This review explores the bidirectional bridge between deep learning and neuroscience: how deep learning helps interpret basic mechanisms of neuroscience, and how neuroscience inspires AI researchers to improve their algorithms. We review research that uses deep learning to investigate components of cognition, such as grid cells, neuron–astrocyte networks, and the hippocampus. In the other direction, deep learning models, chiefly Transformers, have been improved by modifying them and combining them with other models, and a new framework inspired by the structure of the neocortex, known as the "Thousand Brains" theory, has been proposed. Finally, we discuss the limitations that arise in translating biological mechanisms into algorithms. We conclude that combining biological function with deep learning and testing the result across multiple tasks is a feasible way to explore the basic mechanisms of neuroscience and to improve algorithms.
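Because Transformers are the deep-learning model this review returns to throughout, the sketch below illustrates their core operation, scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, as introduced in "Attention Is All You Need" (Vaswani et al., 2017). It is a minimal NumPy illustration for orientation only; the function name and toy dimensions are ours, not taken from any of the reviewed works.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarities
    # Numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # attention-weighted mixture of values

# Toy example: 4 tokens with 8-dimensional queries, keys, and values
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with dimension and saturating the softmax; it is this attention map over learned representations that several of the reviewed works relate to grid-cell codes and hippocampal models.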
Keywords
Neuroscience, Transformers, Deep Learning, Neuron–astrocyte, Hippocampus, Neocortex, Preferential attachment, Redundant Synapse Pruning, Thousand Brains
Cite this article
Wei, Z.; Liu, X.; Xie, T.; Wang, Z.; Bian, W. (2025). Explore Potential in Bridging of Neuroscience and Deep Learning. Applied and Computational Engineering, 132, 20-26.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 2nd International Conference on Machine Learning and Automation
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).