Applications of machine learning in neuroscience and inspiration of reinforcement learning for computational neuroscience

Research Article
Open access


Weihang Jiang 1*
  • 1 International School of Information Science & Engineering, Dalian University of Technology, Dalian, Liaoning Province, China, 116620
  • *corresponding author is0652kf@ed.ritsumei.ac.jp
ACE Vol.4
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-915371-55-3
ISBN (Online): 978-1-915371-56-0

Abstract

High-performance machine learning algorithms have long been a central concern for researchers. Since its birth, machine learning has been a product of multidisciplinary integration. In the field of neuroscience especially, models from related fields have continually inspired the development of neural networks and deepened our understanding of them. The mathematical and quantitative modeling approach brought about by machine learning is, in turn, feeding into the development of neuroscience; one emerging product of this exchange is computational neuroscience. Computational neuroscience has been pushing the boundaries of models of brain function in recent years, and just as early studies of the visual hierarchy influenced neural networks, computational neuroscience has great potential to lead to higher-performance machine learning algorithms, particularly deep learning algorithms with strong links to neuroscience. This paper first reviews the contributions of machine learning to neuroscience in recent years, especially in fMRI image recognition, and then considers possible future directions for neural networks suggested by recent developments in the computational neuroscience of psychiatry, in particular temporal-difference models of dopamine and serotonin.

Keywords:

machine learning, neuroscience, computational neuroscience, decision making, reinforcement learning

Jiang, W. (2023). Applications of machine learning in neuroscience and inspiration of reinforcement learning for computational neuroscience. Applied and Computational Engineering, 4, 473-478.

1. Introduction

Machine learning has been around for decades. It was created from a combination of multiple disciplines, including psychology, biology, neurophysiology, mathematics, automation, and computer science. Here, high performance means both accuracy and efficiency: implementing high-performance machine learning algorithms brings us a step closer to true artificial intelligence.

To this day, as the computing power of computers has increased, once-theoretical machine learning has become practical to apply, and its applications are reflected in many aspects of life. For example, facial recognition technology uses biometric information to identify individuals and protect them: when taking a flight at the airport or paying at the mall, facial recognition has gradually replaced traditional paper-based tools for more efficient and secure identity verification. On major media sites, recommendation algorithms accurately classify even ambiguous movies through feature values, one-hot encodings, and similar representations, providing accurate targeting services to viewers and content creators [1].

Beyond its applications in engineering, one of the most significant roles of machine learning is in helping traditional disciplines update their research methods. Machine learning has in turn acted on many of the fields that gave birth to it, such as neuroscience. It helps traditional neuroscience build more efficient, model-based research processes, from data analysis to mathematical and quantitative modeling approaches. Conversely, a clearer understanding of neurology is helping the field of machine learning build more accurate and abstract models of neural networks.

In this paper, research methods are summarized and analyzed mainly through the existing literature and cases. The article introduces the scope in neuroscience of unsupervised and supervised learning, the two branches of traditional machine learning, and then the application prospects of reinforcement learning, an emerging machine learning paradigm, in computational neuroscience through a specific case involving dopamine and serotonin. This should help readers understand the advantages and disadvantages of unsupervised and supervised learning in neuroscience applications, select more appropriate algorithms, and appreciate the potential role of reinforcement learning in computational neuroscience.

2. The application of traditional machine learning methods in neuroscience

2.1. Data analysis

Since the successful application of machine learning to the recognition of handwritten digits, the use of machine learning for image recognition has become widespread. Widely used mathematical models, such as the nonlinear regression and discriminant models, data reduction models, and nonlinear dynamical systems embodied in neural networks, have had a huge impact on data analysis [2]. Besides neural networks, which attempt to build artificial analogues of biological networks, traditional machine learning approaches based on mathematical algorithms also help neuroscience a great deal in dealing with high-dimensional data [3]. High-dimensional data are common in neuroscience: since high-throughput recording techniques are widely used, analyzing the recorded data efficiently and accurately has become a challenge. Commonly analyzed data include synaptic connectivity data at the micro level [4], EEG data at a more macro level, and fMRI at the macro level.

Machine learning allows for more accurate identification of such high-dimensional data and of the noise generated during measurement, using two main types of methods. Unsupervised approaches try to discover structure in the data without labeled examples, while supervised approaches learn from known classes or objects. In either case, the main method for machine recognition of high-dimensional data is a function that converts high-dimensional input into low-dimensional output [3]; that is, such a function parameterizes the high-dimensional data, and its parameters are optimized so that high-dimensional inputs are transformed into low-dimensional outputs more accurately.
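As a concrete illustration of this idea (a sketch, not code from the paper or the cited works), the following example parameterizes a linear map from high-dimensional recordings to a low-dimensional representation using PCA via the SVD; the data, dimensions, and noise level are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recording": 200 samples of 50-dimensional data that actually
# lie near a 3-dimensional subspace plus a little noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

def pca_project(X, k):
    """Map each high-dimensional row of X to k principal-component scores."""
    Xc = X - X.mean(axis=0)                        # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S                        # low-dim scores, singular values

Z, S = pca_project(X, k=3)
# Fraction of variance captured by the first 3 components.
explained = float((S[:3] ** 2).sum() / (S ** 2).sum())
```

Because the toy data really are near-3-dimensional, the three leading components capture almost all the variance, which is the sense in which the low-dimensional output "more accurately" summarizes the high-dimensional input.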

fMRI

In this paper, fMRI is used as the example of machine learning data analysis in neuroscience. fMRI, an important method in neuroscience research, uses MRI to visualize the hemodynamic responses associated with neuronal activity. Because fMRI is non-invasive, there are no radiation exposure issues and, more appealingly, it allows accurate localization of functionally activated brain areas. Using fMRI scans, researchers can distinguish the brains of Alzheimer's patients from those of healthy people [5]. Resting-state functional magnetic resonance imaging (rs-fMRI) is likewise a widely used imaging method in neuroscience research; it measures spontaneous fluctuations in blood oxygen level-dependent (BOLD) signals throughout the brain.

Unsupervised approaches

Many unsupervised learning methods are widely used in the analysis of rs-fMRI data. The main purpose is to parcellate the brain into discrete functional areas [6]. By applying unsupervised learning to rs-fMRI, researchers have found that even across different spatial, temporal, and ethnic groups, the functional areas of the brain exhibit similar fluctuation patterns in the population, which means machine learning methods can help perform classification and identification on fMRI data. Unsupervised learning algorithms do not need labels and are more of a "learning process" than a "recognition process".
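The parcellation idea can be sketched with a toy clustering example: signals that share a fluctuation pattern are grouped together without any labels. The data, the cluster count, and the simple k-means implementation below are illustrative assumptions, not the actual pipelines used in [6].

```python
import numpy as np

rng = np.random.default_rng(1)

# Three underlying fluctuation patterns, each shared by 30 "voxels"
# (rows are voxel time courses of length 100, plus small noise).
T, n_per = 100, 30
patterns = rng.normal(size=(3, T))
voxels = np.vstack([p + 0.1 * rng.normal(size=(n_per, T)) for p in patterns])

def kmeans(X, k, iters=20):
    # Farthest-point initialization keeps this toy example deterministic.
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[dists.argmax()])
    centers = np.array(centers)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)          # assign each voxel to nearest center
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)  # recompute cluster centers
    return labels

labels = kmeans(voxels, 3)
```

With no labels provided, the algorithm still recovers the three groups, which is the "learning process" character of unsupervised methods described above.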

Although labor-saving, unsupervised learning may produce inaccurate results, and to some extent its algorithms do not help researchers understand the structure of what is learned; the learning process behind the results may remain opaque. In addition, applying unsupervised approaches still requires prior knowledge [3].

Supervised approaches

Supervised approaches such as regularized linear models and SVMs have also played an important role in the classification and recognition of fMRI data [6]. The core idea of supervised learning is to parameterize the high-dimensional data and optimize the parameters so that high-dimensional inputs can be more accurately converted to low-dimensional outputs [3].
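A minimal sketch of this supervised idea, under assumed toy data (regularization and the specific models of [6] are omitted): a parameterized function from 20-dimensional inputs to a scalar class label is fitted by gradient descent on the logistic loss, using the labels as supervision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: 200 high-dimensional inputs with binary labels
# generated by an (unknown to the learner) linear rule.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Learn the parameters w of the high-dim -> low-dim (scalar) mapping.
w = np.zeros(d)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * X.T @ (p - y) / n          # gradient of the mean logistic loss

accuracy = float(((X @ w > 0) == (y == 1)).mean())
```

The labels y are exactly the human-provided supervision discussed below: without them, this "recognition process" could not be trained at all.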

Supervised learning is highly accurate, but this approach requires humans to create labels to aid learning. In contrast to unsupervised learning algorithms, supervised learning is more of a "recognition process" than a "learning process". Such a recognition process, defined by the researchers, may not shed much light on future research, because the machine is simply following established rules for recognition.

Summary of data analysis

All these approaches demonstrate the advantages of machine learning in analyzing high-dimensional neuroscience data. Supervised and unsupervised learning each have their own advantages, but the pursuit of saving time and labor inevitably sacrifices understanding of the model, which becomes more like an opaque "black box"; conversely, the cost of pursuing accurate results is the human labor spent on labeling, and breakthroughs beyond existing models remain difficult.

2.2. Understanding of neural structure

In addition to its widespread use in fMRI data analysis, machine learning plays a variety of important roles in the broader field of neuroscience, including solving real-world engineering problems, identifying and predicting variables to explore exactly which variables relate to a particular function, testing simple models as in fMRI analysis, and serving as a model of the brain itself [7].

Loss function

The loss function has an important place in machine learning. In practical applications, it is often used as the learning objective of an optimization problem: models are solved and evaluated by minimizing the loss function. This research method, brought by machine learning, has led researchers to ask whether there is a loss function in the brain.

Regarding loss functions in the brain, there are three basic hypotheses according to [8]: loss functions do exist in the organism, the organism's structures conform to a minimal set of loss functions, and these structures are embedded in a pre-structured architecture. The brain, as the nerve center of humans, is capable of processing sensory information including visual, auditory, olfactory, gustatory, and tactile signals, whereas existing machine learning systems, whether convolutional neural networks or natural language processing models, are limited in their functions. It is therefore reasonable to believe that behind the brain, the most complex organ produced by biological evolution, there may be an optimal loss function that allows complex information to be processed efficiently and accurately.
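To make "learning as loss minimization" concrete, here is a minimal, purely illustrative example (not a brain model): fitting a line by gradient descent on a mean-squared-error loss, with all data invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of y = 2x + 0.5.
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 0.5 + 0.05 * rng.normal(size=100)

# Minimize the mean-squared-error loss L(a, b) = mean((a*x + b - y)^2)
# by gradient descent on the parameters a and b.
a, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    err = a * x + b - y
    loss = float((err ** 2).mean())
    a -= lr * (2 * err * x).mean()   # dL/da
    b -= lr * (2 * err).mean()       # dL/db
```

Minimizing the loss recovers the generating parameters; the hypothesis above is that something functionally analogous to this objective-driven adjustment may underlie learning in the brain.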

Computational neuroscience

The main idea of computational neuroscience is to study the nervous system at different levels using mathematical methods and computer simulations. Starting from physical models of actual neurons, computational neuroscience can explore the dynamic interactions of neurons, the construction of neural networks, and quantitative theories of brain organization and the nervous system. From the computational perspective, researchers can simulate and explore the brain's information-processing capabilities and investigate mechanisms for building brain-like processing of new information.

There is no doubt that the development of machine learning has played an important role in the development of computational neuroscience [9]. This mathematical approach continues to inspire computational neuroscience to form hypotheses and generate new research paradigms about the commonalities of neural systems.

3. Reinforcement Learning

Recently, a further machine learning paradigm, reinforcement learning, has been studied extensively. Reinforcement learning emphasizes the strategic aspects of problem solving and of the learning process, such as reward and punishment mechanisms. Through these strategies, an agent obtains continuous feedback from its interaction with the environment and adjusts its strategy to maximize reward or achieve a specific goal.

In addition to supervised and unsupervised learning, reinforcement learning may today be a viable way to study neuroscience. Reinforcement learning theory focuses on learning and attempts to maintain a balance between exploration and exploitation. Unlike supervised and unsupervised learning, reinforcement learning does not require any prior data; rather, the agent interacts with the environment, receives feedback, and adjusts its learning strategy positively or negatively to maximize the gain.
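The exploration-exploitation balance can be sketched with a two-armed bandit; the reward probabilities, the epsilon value, and the number of trials below are arbitrary choices for illustration, not taken from the cited literature.

```python
import numpy as np

rng = np.random.default_rng(0)

true_reward = np.array([0.2, 0.8])   # hidden expected rewards of the two actions
Q = np.zeros(2)                      # the agent's learned value estimates
counts = np.zeros(2)
eps = 0.1                            # fraction of trials spent exploring

for t in range(2000):
    if rng.random() < eps:
        a = int(rng.integers(2))     # explore: try a random action
    else:
        a = int(Q.argmax())          # exploit: take the current best action
    r = float(rng.random() < true_reward[a])   # Bernoulli reward feedback
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]   # incremental mean of observed rewards
```

No prior data or labels are provided: the estimates Q are built entirely from reward feedback, and the occasional random exploration is what prevents the agent from locking onto the inferior action.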

The learning approach of reinforcement learning has inspired thinking in computational neuroscience. A recent dopamine-serotonin model using TD learning sheds new light on future research targets and methods in computational neuroscience.

3.1. Temporal difference model

Temporal-difference learning (TD learning) refers to a class of model-free reinforcement learning methods that learn continuously while estimating the current value function. TD methods combine ideas from Monte Carlo methods, which sample through interaction with the environment, and dynamic programming methods, which update based on current estimates. Such a model decomposes decision making into two processes: a learning process, which updates the method of evaluating decisions, and a decision process, which selects a decision based on that evaluation. Temporal-difference learning simulates (or experiences) a sequence or episode in which, at each action step, the value of the preceding state is updated based on the value of the new state.
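The step-by-step update just described can be shown with a minimal TD(0) sketch on a five-state chain, a standard textbook toy problem rather than a model from the cited work: only the final transition yields reward, and each state's value estimate is updated toward the reward plus the discounted value of the next state.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states + 1)           # index n_states is terminal (value 0)

for episode in range(2000):
    s = 0
    while s < n_states:
        s_next = s + 1                               # deterministic step right
        r = 1.0 if s_next == n_states else 0.0       # reward only at the end
        # TD error: reward plus discounted value of the new state,
        # minus the current estimate of the old state's value.
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta
        s = s_next
```

After enough episodes the estimates converge to the discounted values gamma^(distance to reward), and the TD error delta is the quantity that dopamine transients are hypothesized to resemble in the models discussed below.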

Such a process is very appropriate for the study of computational neuroscience [10] and computational psychiatry. This is because differences between people with certain mental disorders and healthy people manifest in the decision-making process [8], and the cause of Parkinson's disease has been suggested to lie in abnormal dopamine secretion [11]. In addition, the perception (learning)-decision process exhibited by TD learning is consistent with the way humans learn, so applying TD learning can more accurately simulate human decision making. Taking Parkinson's disease as an example, differences in perceived dopamine secretion lead to different decisions in the TD model, and the quantitative study of these differences in hormone levels may help us understand the causes of differences in decision making.

3.2. Competing-Critics model: an improvement of TD learning

Traditional TD learning is regarded as risk-neutral because it concentrates on maximizing average returns; all returns are therefore weighted equally, regardless of size. It is not risk-sensitive, it is linear with a single prediction error, and it cannot track multidimensional errors.

In addition to the effect of dopamine on Parkinson's disease, serotonin has recently been shown to have an inhibitory effect opposite to that of dopamine. Abnormal dopamine secretion alone cannot explain abnormal decision making in psychiatric patients, but if one starts from TD learning and updates both dopamine and serotonin states along with the state update, the causes of abnormal decision making can be understood more clearly. Therefore, an improved version of TD learning named the competing-critics model has been proposed [1].

There are three reasons for proposing this improved model.

Reason 1:

The updates for positive and negative errors are asymmetric. From the available evidence, dopamine transients are more responsive to positive errors than to negative errors, and there is a biological explanation: the firing rate of dopamine neurons cannot decrease below its baseline by as large a magnitude as it can increase above it. That is, the magnitude of a positive increase is greater than that of a negative decrease, an asymmetric relationship.

Reason 2:

The sensitivity to risk embodied in risk-sensitive learning models is thought to be a difference between healthy people and people with mental illness.

Reason 3:

A recent study has revealed the multidimensional nature of human decision making. According to this study, serotonin is also involved in the decision-making process. Like dopamine, serotonin changes in amount during error prediction, but its function is the opposite of dopamine's. We therefore need a model that can track errors in multiple dimensions.

Based on TD learning, the competing-critics model breaks decision making down into a learning process and a decision process. The learning process involves a set of competing optimistic and pessimistic critics; the names relate to the respective roles of dopamine and serotonin in decision making. Dopamine excites the nerves, pushing behavior toward risk, while serotonin inhibits them, pushing behavior away from risk. The decision process eventually integrates the predictions of each system and reaches a conclusion.
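The competing-critics idea can be sketched with two value estimates updated asymmetrically: an "optimistic" critic that amplifies positive prediction errors (a stand-in for dopamine) and a "pessimistic" critic that amplifies negative ones (a stand-in for serotonin). The update rules, weights, and reward stream below are illustrative assumptions, not the exact model of [1].

```python
import numpy as np

rng = np.random.default_rng(0)

V_opt, V_pes = 0.0, 0.0                        # optimistic / pessimistic critics
alpha = 0.1
rewards = rng.choice([-1.0, 1.0], size=5000)   # risky option: mean 0, high variance

for r in rewards:
    d_opt = r - V_opt                          # each critic's prediction error
    d_pes = r - V_pes
    # Asymmetric updates: each critic weights one sign of error more heavily.
    V_opt += alpha * (d_opt if d_opt > 0 else 0.5 * d_opt)
    V_pes += alpha * (d_pes if d_pes < 0 else 0.5 * d_pes)

# A decision process could integrate both evaluations; with equal weights
# the combination stays near the risk-neutral value (around 0 here).
combined = 0.5 * (V_opt + V_pes)
```

The optimistic critic settles above the true mean and the pessimistic one below it, so how the decision process weights the two determines how risk-seeking or risk-averse the simulated agent is, which is the dimension along which patients and healthy controls are hypothesized to differ.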

The simulations of this model were applied in a gambling experiment. In the results, the updates of the two learning systems during the decision process tracked transient changes in dopamine and serotonin, providing evidence for serotonin's role as a rival to dopamine.

4. Conclusion

Machine learning methods have an important role in the study of neuroscience. Both supervised and unsupervised learning methods have their own advantages and disadvantages, but it is undeniable that these two methods are currently the two most widely used approaches to machine learning, and they have an essential role for neuroscience in both simple data processing and complex image recognition. The search for loss functions, a research paradigm arising from machine learning, is also influencing the object of study in neuroscience. At the same time, we should also note the role of reinforcement learning for neuroscience, especially computational neuroscience.

In reinforcement learning, the reward state closely matches the reward and punishment mechanisms of the various hormones in the human body; paired hormones or neurotransmitters with opposite effects are not uncommon in the nervous system. In reinforcement learning terms, reward corresponds to excitatory neurotransmitters and punishment to inhibitory neurotransmitters. By studying and modeling reward and punishment, we can gain a deeper understanding of the balance behind neurotransmitter action. Compared with supervised learning, reinforcement learning retains the labor-saving feature of unsupervised learning and does not require labels as an aid; at the same time, it compensates for the "opaqueness" of unsupervised learning and allows researchers to observe learning as it happens during the reinforcement learning process.

The case discussed in this paper is limited to serotonin and dopamine; reinforcement learning models of other paired antagonistic neurotransmitters in the nervous system remain unclear, and they could be a future research direction.

Similarly, reinforcement learning also faces many difficulties. Compared with traditional machine learning research methods, its advantage is that it conforms to the neural reward and punishment mechanism, but using it to study neuroscience depends more heavily on neuroscience research and experimental results. The development of reinforcement learning needs to pay more attention to the integration of multiple disciplines.


References

[1]. Enkhtaivan, E., Nishimura, J., Ly, C., & Cochran, A. L. (2021). A competition of critics in human decision-making. Computational Psychiatry, 5(1).

[2]. Sarle, W. S. (1994). Neural networks and statistical models.

[3]. Helmstaedter, M. (2015). The mutual inspirations of machine learning and neuroscience. Neuron, 86(1), 25-28.

[4]. Helmstaedter, M., Briggman, K. L., Turaga, S. C., Jain, V., Seung, H. S., & Denk, W. (2013). Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature, 500, 168-174.

[5]. Sarraf, S., Tofighi, G., et al. (2016). DeepAD: Alzheimer's disease classification via deep convolutional neural networks using MRI and fMRI. bioRxiv, 070441.

[6]. Khosla, M., Jamison, K., Ngo, G. H., Kuceyeski, A., & Sabuncu, M. R. (2019). Machine learning in resting-state fMRI analysis. Magnetic resonance imaging, 64, 101-121.

[7]. Glaser, J. I., Benjamin, A. S., Farhoodi, R., & Kording, K. P. (2019). The roles of supervised machine learning in systems neuroscience. Progress in neurobiology, 175, 126-137.

[8]. Marblestone, A. H., Wayne, G., & Kording, K. P. (2016). Toward an integration of deep learning and neuroscience. Frontiers in computational neuroscience, 94.

[9]. Wiecki, T. V., Poland, J., & Frank, M. J. (2015). Model-based cognitive neuroscience approaches to computational psychiatry: clustering and classification. Clinical Psychological Science, 3(3), 378-399.

[10]. Paulus, M. P. (2020). Driven by pain, not gain: Computational approaches to aversion-related decision making in psychiatry. Biological psychiatry, 87(4), 359-367.

[11]. Frank, M. J. (2005). Dynamic dopamine modulation in the basal ganglia: a neurocomputational account of cognitive deficits in medicated and nonmedicated Parkinsonism. Journal of Cognitive Neuroscience, 17(1), 51-72.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 3rd International Conference on Signal Processing and Machine Learning

ISBN: 978-1-915371-55-3 (Print) / 978-1-915371-56-0 (Online)
Editor: Omer Burak Istanbullu
Conference website: http://www.confspml.org
Conference date: 25 February 2023
Series: Applied and Computational Engineering
Volume number: Vol. 4
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
