Research Article
Open access

Maze solving problem using Q-learning

Chengcong Xu 1*
  • 1 Boston University
  • * Corresponding author: donaldxu@bu.edu
Published on 14 June 2023 | https://doi.org/10.54254/2755-2721/6/20230909
ACE Vol. 6
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-915371-59-1
ISBN (Online): 978-1-915371-60-7

Abstract

In recent years, a number of research initiatives have employed Q-learning. Because of its straightforward logic of assigning a corresponding action to each potential state, it is one of the most widely used reinforcement learning techniques. In this research, we found a way to expedite the agent's training process. A simple environment, a frozen lake, was used: the agent's goal is to reach the destination while avoiding obstacles after a series of training episodes. A worked example is presented in this research. The basic environment and agent were created in Python, and the basic form of Q-learning was applied. We implemented a Q-learning algorithm to solve both a 4x4 frozen lake and a more complex 8x8 frozen lake. The results showed that training takes a long time, and considerably longer in the more complex environment. We hypothesize that managing the exploration-exploitation tradeoff can speed up training, so we introduce a parameter, epsilon, which balances the agent's random exploration against its exploitation of learned values during training. Among the decay schedules for this tradeoff, exponential decay performs better than linear decay.
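To make the abstract's setup concrete, below is a minimal, self-contained sketch of tabular Q-learning with an epsilon-greedy policy and exponential epsilon decay on a 4x4 frozen lake. The grid layout follows the common S (start) / F (frozen) / H (hole) / G (goal) convention; the hyperparameter values, helper names, and environment code are illustrative assumptions, not the implementation used in the paper.

```python
import math
import random

# 4x4 frozen lake: S = start, F = frozen (safe), H = hole, G = goal.
LAKE = ["SFFF",
        "FHFH",
        "FFFH",
        "HFFG"]
N = 4                                          # grid side length
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Apply an action to a state index; return (next_state, reward, done)."""
    r, c = divmod(state, N)
    dr, dc = ACTIONS[action]
    r = min(max(r + dr, 0), N - 1)             # clamp moves to the grid borders
    c = min(max(c + dc, 0), N - 1)
    nxt = r * N + c
    cell = LAKE[r][c]
    if cell == "G":
        return nxt, 1.0, True                  # reached the destination
    if cell == "H":
        return nxt, 0.0, True                  # fell into a hole
    return nxt, 0.0, False

alpha, gamma = 0.1, 0.99                       # learning rate, discount factor
eps_max, eps_min, decay = 1.0, 0.01, 0.001     # exploration schedule
Q = [[0.0] * len(ACTIONS) for _ in range(N * N)]

for episode in range(10_000):
    # Exponential decay: epsilon shrinks smoothly toward eps_min, shifting
    # the agent from exploration toward exploitation as training proceeds.
    eps = eps_min + (eps_max - eps_min) * math.exp(-decay * episode)
    state, done = 0, False
    while not done:
        if random.random() < eps:              # explore: pick a random action
            action = random.randrange(len(ACTIONS))
        else:                                  # exploit: act greedily w.r.t. Q
            action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) toward the bootstrapped target.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt
```

A linear schedule would instead compute something like eps = max(eps_min, eps_max - slope * episode); the paper's finding is that an exponential schedule of the kind above trains faster than this linear alternative.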

Keywords:

exploration-exploitation, epsilon, exponential decay, linear decay.


References

[1]. Bhatt, S. (2019, April 19). Reinforcement learning 101. Medium. Retrieved August 23, 2022, from https://towardsdatascience.com/reinforcement-learning-101-e24b50e1d292

[2]. Watkins, C. J. C. H. (1989, May). Learning from delayed rewards (PhD thesis). King's College, Cambridge, UK. Retrieved from http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf

[3]. Epsilon-greedy algorithm in reinforcement learning. (2022, August 23). GeeksforGeeks. Retrieved November 15, 2022, from https://www.geeksforgeeks.org/epsilon-greedy-algorithm-in-reinforcement-learning/

[4]. Spryn, M. (2017, October 28). Solving a maze with Q-learning. MitchellSpryn. Retrieved August 23, 2022, from http://www.mitchellspryn.com/2017/10/28/Solving-A-Maze-With-Q-Learning.html

[5]. Baeldung. (2022, November 11). Epsilon-Greedy Q-Learning. Retrieved November 15, 2022, from: https://www.baeldung.com/cs/epsilon-greedy-q-learning

[6]. Q-learning for beginners: Train an AI to solve the frozen lake… (n.d.). Towards Data Science. Retrieved November 15, 2022, from https://towardsdatascience.com/q-learning-for-beginners-2837b777741

[7]. Brownlee, J. (2019, August 6). How to configure the learning rate when training deep learning neural networks. Machine Learning Mastery. Retrieved November 15, 2022, from https://machinelearningmastery.com/learning-rate-for-deep-learning-neural-networks/

[8]. Understanding the role of the discount factor in reinforcement learning. (n.d.). Cross Validated, Stack Exchange. Retrieved November 15, 2022, from https://stats.stackexchange.com/questions/221402/understanding-the-role-of-the-discount-factor-in-reinforcement-learning

[9]. Arnold, K. (2022, April 11). Q-table reinforcement learning. Observable. Retrieved November 15, 2022, from https://observablehq.com/@kcarnold/q-table-reinforcement-learning

[10]. Exploitation and exploration in machine learning. (n.d.). Javatpoint. Retrieved October 4, 2022, from https://www.javatpoint.com/exploitation-and-exploration-in-machine-learning


Cite this article

Xu, C. (2023). Maze solving problem using Q-learning. Applied and Computational Engineering, 6, 1491-1497.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 3rd International Conference on Signal Processing and Machine Learning

ISBN: 978-1-915371-59-1 (Print) / 978-1-915371-60-7 (Online)
Editor: Omer Burak Istanbullu
Conference website: http://www.confspml.org
Conference date: 25 February 2023
Series: Applied and Computational Engineering
Volume number: Vol. 6
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
