1. Introduction
Gobang, also known as gomoku, five-in-a-row, or renju, is a two-player strategy board game. One player uses black stones and moves first; the other uses white stones and moves second, and the two then alternate moves. The first player to form an unbroken line of five stones of their own color wins. Originally the board had 19 rows and 19 columns, like a Go board, but the large board gave Black too great an advantage over White [1]. As a result, the standard board today consists of 15 equally spaced horizontal and 15 vertical lines, giving 225 intersections on which to play and providing good conditions for artificial-intelligence methods such as deep learning.
Gobang has a long history. Books on the game had already appeared in Japan by the middle of the 19th century, and the earliest record of gobang in Chinese literature dates to the late 19th century. In 1988 the Renju International Federation was established, marking the game's spread worldwide [2]. Even now, gobang remains one of the most popular board games in the world, especially in Asia.
With the development of artificial intelligence, more and more solutions and algorithms for gobang have appeared. First, the game tree, a tree-shaped diagram that lays out the players' possible moves in order, became a common way to predict game outcomes. Combined with an evaluation function, which scores how valuable different moves are, it allowed AIs to be designed that could choose stronger moves. However, strong players do not consider only the current position; they look ahead, so an AI must also search deeper and weigh future positions comprehensively. To beat such opponents, minimax search was proposed: it assesses moves under the assumption that the opponent always plays the optimal reply, and with sufficient depth it can raise the win rate of a gobang AI. Nonetheless, the game tree is far too large to enumerate exhaustively in reasonable time. To improve efficiency and feasibility, further techniques were developed, of which alpha-beta pruning is typical. It optimizes minimax search by skipping nodes that can never be chosen because a better alternative already exists, which reduces the amount of computation, raises the search speed, and lowers the workload. More recently, novel algorithms such as the genetic algorithm and Monte Carlo tree search have appeared, improving efficiency even further.
The remainder of the chapter is organized as follows. Section 2 describes the basics of gobang and the development and features of four algorithms. Section 3 presents and discusses the results of these algorithms with specific data. Finally, Section 4 summarizes the chapter and presents conclusions drawn from the solutions discussed here.
2. Method
2.1. Introduction to the gobang game
2.1.1. Versions As gobang has developed, several variants have emerged, including the Free-style, Standard, Renju, and professional versions.
In the Free-style version, a line of exactly five stones is not required: a row of more than five consecutive stones of the same color (an overline) also wins.
In the Standard version, players must form a line of exactly five stones, and an overline of more than five consecutive stones of the same color does not count as a win.
In the Renju version, there is no restriction on White, while Black is restricted to reduce its advantage: if Black forms an overline, or a so-called double three or double four, Black loses the game.
In the professional version, the restrictions become more complicated. Black must place the first stone on the center of the board, White's reply must be placed on one of the 8 intersections adjacent to Black's first stone, and Black's second stone must lie outside the 5×5 square centered on Black's first stone.
Many other minor variants exist, but they are not discussed in this article, which covers only the more common versions above; the difference between the Free-style and Standard winning conditions is sketched below.
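As a concrete illustration of that difference, the following minimal sketch checks a single line of stones (a row, column, or diagonal extracted as a string); the '.'/'B'/'W' string encoding and the function name are assumptions made only for this example.

# Illustrative sketch: does `line` contain a winning run for `color`?
# Free-style accepts any run of five or more; Standard requires exactly five,
# so an overline does not win. The string encoding is assumed for the example.
def line_wins(line, color, free_style=True):
    run = 0
    for cell in line + ".":              # trailing sentinel closes the last run
        if cell == color:
            run += 1
        else:
            if (free_style and run >= 5) or (not free_style and run == 5):
                return True
            run = 0
    return False

# Example: an overline of six black stones wins in Free-style but not Standard.
# line_wins("..BBBBBB.W.....", "B")                    -> True
# line_wins("..BBBBBB.W.....", "B", free_style=False)  -> False

Under the Renju rules the same exact-five test would apply to Black, with the double-three and double-four prohibitions handled separately.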
2.1.2. Main model The most common formal model of gobang today is the m, n, k-game: two players, each using stones of one color, move in turn on a board with m rows and n columns, and the first to form a line of k stones in a row wins. For example, the familiar game of tic-tac-toe is a 3,3,3-game, and common versions of gobang can be seen as 15,15,5-games; a minimal win test for this model is sketched below. According to existing studies and theories, the second player cannot force a win in an m, n, k-game [3]. In 1994, Allis proved, using db-search and pn-search, that in free-style and standard gobang the first player (always the black side) can force a win [4]. Then, in 2001, Wágner, using game-tree search that took 9000 hours on a Pentium 200 MHz, proved that the first player can also force a win in the Renju version [5]. Once the first player was proved to have a winning strategy, research on gobang AI turned toward greater efficiency and higher win rates.
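The m, n, k formulation translates directly into a simple win test. The sketch below, written only as an illustration, checks whether the stone just placed at (row, col) completes at least k in a row on an m-by-n board stored as a list of lists; the board representation and helper name are assumptions. It implements the free-style condition (k or more); for the Standard rule the exact-five test from the previous sketch would be substituted.

# Illustrative m, n, k-game win check: after `player` places a stone at
# (row, col) on an m-by-n board (list of lists, None = empty), has a line of
# at least k stones been completed? For gobang, m = n = 15 and k = 5.
def wins(board, row, col, player, k=5):
    m, n = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]   # horizontal, vertical, two diagonals
    for dr, dc in directions:
        count = 1                                     # the stone just placed
        for sign in (1, -1):                          # walk away from it in both senses
            r, c = row + sign * dr, col + sign * dc
            while 0 <= r < m and 0 <= c < n and board[r][c] == player:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= k:
            return True
    return False

# The same function covers tic-tac-toe as wins(board, row, col, player, k=3).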
2.2. Algorithms
2.2.1. Minimax search In 1950, Shannon described a program based on minimax search for playing chess. He constructed an evaluation function to score positions so that the opponent's strongest reply could be predicted and answered [6]. Although this approach suffers from long computation times and limited search depth, it is a reasonable method for board games, and many gobang AIs have used minimax search. Improvements followed soon after its proposal, of which alpha-beta pruning is a typical example.
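As an illustration only, and not Shannon's original program, a minimal depth-limited minimax search for a gobang-style position might look as follows; `is_terminal`, `evaluate`, `legal_moves`, `apply_move`, and `undo_move` are hypothetical methods that a concrete engine would have to supply.

# Illustrative depth-limited minimax sketch. The board methods are hypothetical
# helpers assumed for this example, not part of any specific engine.
def minimax(board, depth, maximizing):
    """Return the minimax value of `board` after searching `depth` plies."""
    if depth == 0 or board.is_terminal():
        return board.evaluate()            # heuristic score for the maximizing side
    if maximizing:
        best = float("-inf")
        for move in board.legal_moves():
            board.apply_move(move)
            best = max(best, minimax(board, depth - 1, False))
            board.undo_move(move)
        return best
    best = float("inf")
    for move in board.legal_moves():
        board.apply_move(move)
        best = min(best, minimax(board, depth - 1, True))
        board.undo_move(move)
    return best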
2.2.2. Alpha-beta pruning Plain minimax search examines many branches that cannot affect the final decision, which slows computation. To discard this work and shrink the search space, an algorithm such as alpha-beta pruning is needed: it stops considering a move and its continuations as soon as it is clear that the opponent would prefer a better alternative found earlier. Edwards et al. proposed the alpha-beta heuristic in 1961 [7], and alpha-beta pruning is still widely chosen for gobang AI today.
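The minimax sketch above can be extended with alpha-beta cutoffs; again this is only an illustration built on the same hypothetical board methods, not a specific published implementation.

# Illustrative minimax with alpha-beta pruning, using the same hypothetical
# board methods as the previous sketch.
def alphabeta(board, depth, alpha, beta, maximizing):
    if depth == 0 or board.is_terminal():
        return board.evaluate()
    if maximizing:
        value = float("-inf")
        for move in board.legal_moves():
            board.apply_move(move)
            value = max(value, alphabeta(board, depth - 1, alpha, beta, False))
            board.undo_move(move)
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent already has a better option: prune
                break
        return value
    value = float("inf")
    for move in board.legal_moves():
        board.apply_move(move)
        value = min(value, alphabeta(board, depth - 1, alpha, beta, True))
        board.undo_move(move)
        beta = min(beta, value)
        if beta <= alpha:              # prune
            break
    return value

# Typical root call:
# alphabeta(board, depth=4, alpha=float("-inf"), beta=float("inf"), maximizing=True)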
2.2.3. Genetic algorithm The genetic algorithm was introduced by Holland [8]. It draws on the principles of natural biological evolution and searches for an optimal solution by simulating the evolutionary process of selection, crossover, and mutation. Wang et al. applied it to the gobang game and obtained considerable results [9].
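One common way to use a genetic algorithm in this setting, given here only as a sketch and not necessarily the formulation of Wang et al. [9], is to evolve the weights of the move-evaluation function; `fitness` is a hypothetical function that plays games with a given weight vector and returns a score such as a win rate.

import random

# Illustrative sketch: evolving evaluation-function weights for a gobang engine
# with a genetic algorithm. `fitness` is a hypothetical scoring function.
def evolve(fitness, n_weights=8, pop_size=20, generations=50, mutation_rate=0.1):
    population = [[random.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]               # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_weights)       # single-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_weights):                 # random mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, 0.1)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)                # best weight vector found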
2.2.4. Monte Carlo tree search In the middle of the 20th century, the Monte Carlo method, a numerical calculation technique guided by probability and statistics, was devised by Metropolis [10]. In the early 21st century, Coulom applied the Monte Carlo method to the board game Go and described its combination with the game tree as Monte Carlo tree search [11]. The fact that Go AIs went on to beat many professional human players shows how successful the approach was, and that success attracted many researchers to apply it in other areas. For example, Li et al. designed a game model for gomoku (another name for gobang) based on deep learning and Monte Carlo tree search and obtained promising results [12].
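For illustration, a basic UCT-style Monte Carlo tree search might be sketched as below; it is not the model of Li et al. [12], which additionally uses deep neural networks, and `GameState`, with its `player_to_move`, `legal_moves()`, `play()`, `is_terminal()`, and `result()` members, is a hypothetical interface assumed for the example.

import math
import random

# Illustrative UCT-style Monte Carlo tree search. `GameState` is a hypothetical
# class exposing player_to_move, legal_moves(), play(move) -> new state,
# is_terminal(), and result(player) returning 1 (win), 0.5 (draw) or 0 (loss).
class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.player = state.player_to_move       # player about to move here
        self.children = []
        self.untried = list(state.legal_moves())
        self.wins, self.visits = 0.0, 0

    def ucb1(self, c=1.4):
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes by the UCB1 rule.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(list(state.legal_moves())))
        # 4. Backpropagation: credit each node from the viewpoint of the
        #    player who made the move leading into it.
        while node is not None:
            node.visits += 1
            if node.parent is not None:
                node.wins += state.result(node.parent.player)
            node = node.parent
    # Play the most-visited move from the root (assumes at least one legal move).
    return max(root.children, key=lambda n: n.visits).move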
3. Results and discussion
Data on different gobang algorithms, including minimax search, alpha-beta pruning, and the genetic algorithm, drawn from the references, are shown and compared in Table 1 [9, 13].
From the data in Table 1, it can be seen that, if the AI uses only the game tree and minimax search, an enormous number of nodes must be examined to reach a considerable depth; one study noted that a Pentium 200 MHz machine could search only about 400 nodes per second, so finding the best move takes a very long time [5]. With alpha-beta pruning, the number of searched nodes drops significantly, to about 2.2E+07, recovering part of the wasted time. The genetic algorithm performs better still: it needs only about 1.00E+04 searched nodes to reach a similar depth, and according to the study it can select a strong move in only 0.6 seconds per turn, which makes it a sufficiently powerful approach for gobang AI [9]. More recently, Monte Carlo tree search, which achieved great success in Go and beat many professional human players, has been applied to gobang. In their study, Li et al. collected 53 different Internet models and had them play against each other for deep learning through Monte Carlo tree search; in the 53-vs-1 matchup, Black won 100 games with no draws and no losses [12]. With neural networks built on TensorFlow and Keras, a gobang AI based on Monte Carlo tree search can grow stronger through numerous self-play games, like AlphaGo, which effectively increases its win rate.
Because the board is smaller and the rules are simpler than those of Go, gobang AI has already reached a sufficiently high win rate and computation speed, so researchers now pay more attention to other aspects. For example, Lin et al. designed an online gobang game based on MFC Socket to spread the game to more people [14], and Li et al. used machine learning to build a gobang recommendation system to support learning of the game [15].
4. Conclusion
In this work, different algorithms were reviewed to trace the development of gobang AI. Game-tree search, minimax search, alpha-beta pruning, the genetic algorithm, and Monte Carlo tree search were applied to gobang AI one after another. From the results of these algorithms and the comparison among them, it can be seen that early approaches such as the plain game tree and minimax search had many problems; with improvements such as alpha-beta pruning, the genetic algorithm, and Monte Carlo tree search, the computation speed and win rate became better and better. The genetic algorithm and Monte Carlo tree search may currently be good choices for an AI playing gobang. In the future, it can be expected that better algorithms with higher speed and win rates will appear and be applied in more fields.
References
[1]. Sakata G et al. 1981 Five-in-a-row Renju: For Beginners to Advanced Players Ishi Press
[2]. Wikipedia 2022 Gomoku https://en.wikipedia.org/wiki/Gomoku
[3]. Wikipedia 2022 Mnk-game https://en.wikipedia.org/wiki/Mnk-game
[4]. Allis L V 1994 Searching for solutions in games and artificial intelligence Wageningen: Ponsen & Looijen
[5]. Wágner J et al. 2001 Solving renju ICGA Journal 24(1) 30-35
[6]. Shannon C E 1950 XXII Programming a computer for playing chess The London Edinburgh and Dublin Philosophical Magazine and Journal of Science 41(314) 256-275
[7]. Edwards D J 1961 The alpha-beta heuristic DSpace@MIT
[8]. Holland J H 1992 Adaptation in natural and artificial systems: an introductory analysis with applications to biology control and artificial intelligence MIT press
[9]. Wang J et al. 2014 Evolving Gomoku solver by genetic algorithm 2014 IEEE Workshop on Advanced Research and Technology in Industry Applications (WARTIA)
[10]. Metropolis N et al. 1949 The monte carlo method Journal of the American Statistical Association 44(247) 335-341
[11]. Coulom Rémi 2009 The monte-carlo revolution in go The Japanese-French Frontiers of Science Symposium (JFFoS) Roscoff France Vol 115
[12]. Li Xiali et al 2019 A game model for Gomoku based on deep learning and monte carlo tree search Chinese Intelligent Automation Conference Springer Singapore
[13]. Zheng J L 2019 Research and Implementation of Gobang Intelligent Game Algorithm Based on Minimax Search and Alpha Beta Pruning Algorithm Journal of Wenzhou University (Natural Science Edition) 40(3) 53-62
[14]. Lin Y M 2013 Design and Implementation of Gobang Online Game Based on MFC Socket Applied Mechanics and Materials Vol 380 Trans Tech Publications Ltd
[15]. Li M Z 2021 Research on the Recommendation System of Gobang Game Based on Machine Learning The 2nd International Conference on Computing and Data Science