A review of artificial intelligence in video games: From preset scripts to self-learning

Research Article
Open access

Junze Zhu 1*
  • 1 NingBo BinHai International Cooperative School    
  • *corresponding author 2372239471@qq.com
Published on 22 March 2024 | https://doi.org/10.54254/2755-2721/49/20241083
ACE Vol.49
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-343-2
ISBN (Online): 978-1-83558-344-9

Abstract

With the steady advance of technologies such as artificial intelligence and big data in the 21st century, these ever-evolving technologies have also greatly contributed to today's flourishing field of video games. This paper focuses on the development of artificial intelligence applications in video games over the past two decades, from preset scripts to self-learning, and adopts the research method of literature review. The paper concludes that the shift from pre-scripted to self-learning AI marks a shift in video games from experiences with clear rules and controlled processes to complex, dynamic, personalized experiences. This shift brings not only new opportunities but also new challenges. In the future, we can expect to see more research and practice that explores and takes advantage of the further possibilities of self-learning AI in video games.

Keywords:

Artificial Intelligence, Video Games, Deep Learning, Preset Scripting, Self-Learning

1. Introduction

The application of artificial intelligence in video games has evolved from early pre-programmed scripts to today's self-learning mechanisms. This change not only greatly enhances the interactivity and realism of games, but also opens up entirely new gaming experiences. However, there are still many open challenges and unexplored opportunities in this area, making video games an important area for AI research.

In early video games, the behavior of non-player characters (NPCs) was usually controlled by predefined rules and scripts. While this approach was simple and easy to implement, it severely limited the playability and depth of games. Players could predict and adapt to these pre-determined behaviors, and as a result games often lacked challenge and novelty. With advances in AI technology, however, especially deep learning and reinforcement learning, it has become possible to train NPCs to understand and adapt to the game environment, making their behavior more natural and challenging.

Self-learning AI brings more than just a better gaming experience. Its emergence and development have also led to a change in the way video games are created and designed. The traditional game development process typically requires a lot of manpower and time to preset and test various elements of the game. With self-learning AI, however, we can automate this process, making game development more efficient and flexible.

Despite the obvious benefits of self-learning AI in video games, it also presents some new challenges. For example, how can we ensure that the AI's behavior is interesting and challenging to the player while still obeying the rules of the game? How can complex deep learning algorithms be implemented with limited resources? And how should the moral and ethical issues that AI may raise be addressed?

This paper will explore these questions and try to provide some possible solutions. We will first review the historical development of AI in video games, then discuss in detail the specific applications and implications of pre-scripted and self-learning AI in video games, and finally explore the implications and challenges that the shift from preset scripts to self-learning AI has brought to video games, as well as possible future trends.

2. The historical development of artificial intelligence in video games

The use of artificial intelligence in video games dates back to the early days of video games. It has evolved from simple pre-programmed scripts to today's deep and reinforcement learning, a process closely linked to the development of computer technology and advances in theoretical research.

When video games became popular in the 1980s, game AI was mainly based on fixed rules or simple decision trees. These predefined patterns of behavior could not adapt to changes in the game environment, so the behavior of non-player characters (NPCs) tended to be uniform and predictable. Given the limited computing power of the era, however, these methods were sufficient to meet the requirements of games at the time, and such simple game AI was also in line with the prevailing design philosophy, which emphasized clarity of rules and controllability of the process.

As the demand for realism and complexity in video games increased, the limitations of preset scripts were gradually exposed. To enable NPCs to make more complex decisions, game developers began to introduce more sophisticated AI techniques such as fuzzy logic, genetic algorithms, and neural networks. These techniques allowed for more variation in NPC behavior and some adaptation to changes in the game environment, but their use was still limited by the complexity and computational requirements of these techniques.

In the early 21st century, new AI techniques such as deep learning and reinforcement learning began to emerge as computational power increased. These techniques allow AI to gain knowledge by learning from and adapting to the environment, rather than relying on predetermined rules. This allows NPCs to behave in more natural and challenging ways, and to learn and improve over the course of the game. Applications of this type of AI to games, such as DeepMind's AlphaGo and OpenAI's Dota 2 AI, have been remarkably successful, demonstrating the viability and potential of deep learning and reinforcement learning in video games.

Today, AI is shifting from supporting game design and delivering predefined game experiences to driving the game experience and creating new game models. The shift from pre-scripted to self-learning AI not only demonstrates the advancement of AI technology but also reflects the increased need for interactivity and authenticity in video games. Although the application of self-learning AI in video games is still in its infancy, it has already shown us its great potential and possibilities.

3. Preset scripting in video games

Video games are an interactive medium based on predetermined rules and complex systems. In early video games, artificial intelligence performance relied heavily on preset scripting, an approach that largely shaped the style and character of early video games.

Preset scripting is a programming technique used to define the behavior and reactions of non-player characters (NPCs) in a game. Based on predefined rules and conditions, the NPC acts accordingly. For example, an NPC may attack the player when the player comes within a certain range, or run away when its health drops below a certain level. These behaviors are preset by the developer during game production [1].
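To make this concrete, the following is a minimal sketch of the kind of rule-based NPC script described above. The names and thresholds (attack_range, flee_health) are illustrative assumptions for the example, not taken from any particular game or engine.

```python
class ScriptedNPC:
    """A preset-script NPC: behavior is fully determined by fixed, developer-authored rules."""

    def __init__(self, attack_range: float = 5.0, flee_health: int = 20):
        self.attack_range = attack_range   # distance at which the NPC attacks
        self.flee_health = flee_health     # health below which the NPC runs away
        self.health = 100

    def decide(self, distance_to_player: float) -> str:
        """Return an action chosen purely by predefined rules and conditions."""
        if self.health < self.flee_health:
            return "flee"
        if distance_to_player <= self.attack_range:
            return "attack"
        return "patrol"


# The same inputs always yield the same, predictable behavior.
npc = ScriptedNPC()
print(npc.decide(distance_to_player=3.0))   # -> "attack"
npc.health = 10
print(npc.decide(distance_to_player=3.0))   # -> "flee"
```

Because every branch is authored by hand, such a script is easy to test and control, which is exactly the property discussed next.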

The advantage of preset scripts is their simplicity and controllability. Developers can accurately predict and control the behavior of NPCs, which makes the game design and testing process relatively easy [2]. In addition, preset scripts can also produce a consistent and reliable game experience, where players can play the game by understanding and using these preset rules. This is consistent with the design concepts of rule clarity and process controllability emphasized in early video games [3].

However, the disadvantages of preset scripts are also obvious. First, because NPC behavior is based entirely on preset rules, it tends to be uniform and predictable. Players can exploit these AIs by learning the rules and finding ways to "break" the game. Second, preset scripts cannot adapt to changes in the game environment, which limits the complexity and depth of the game. For example, in complex multiplayer online games, preset scripts often fail to produce satisfactory results because the environment and player behavior are so variable [4].

Over the past few decades, as video games have evolved, preset scripts have shifted from a dominant tool to a support tool. Many games now make use of more sophisticated artificial intelligence techniques, such as deep learning and reinforcement learning, to control the behavior of NPCs [5]. However, preset scripts still play an important role in many games, especially those that emphasize story and character behavior [6].

Overall, the use of preset scripts in video games reflects both the limitations of AI and its impact on game design and experience. Although the use of preset scripting is somewhat limited, it remains an important tool for video game development.

4. Application of self-learning AI in video games

With the development of artificial intelligence technology, especially the emergence of deep learning and reinforcement learning, self-learning AI has begun to play a role in video games. Compared to traditional pre-programmed scripts, self-learning AI can acquire knowledge by learning and adapting to its environment, making its behavior more natural and sophisticated.

A distinguishing feature of self-learning AI is its ability to deal with complex decision-making problems. In video games, players typically need to integrate a large amount of information to make optimal decisions. This is a challenge for AI, as it must reason over large state and action spaces. However, techniques such as deep learning and reinforcement learning have demonstrated their potential in this regard [7]. Through self-learning, an AI can continuously optimize its strategies during gameplay and thus achieve high performance in complex game environments.
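The following is a minimal tabular Q-learning sketch of this "optimize a strategy by interacting with the environment" idea. The toy corridor environment, reward values, and hyperparameters are illustrative assumptions, not taken from any of the systems cited in this paper.

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right"]   # toy action space
START, GOAL = 0, 4            # positions on a one-dimensional corridor of length 5

def step(state, action):
    """Toy environment: reaching position 4 yields a reward of +1 and ends the episode."""
    next_state = max(0, min(GOAL, state + (1 if action == "right" else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

q = defaultdict(float)        # Q-table mapping (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state, done = START, False
    while not done:
        # Epsilon-greedy: mostly exploit the current strategy, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy moves toward the goal from every non-goal state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```

Deep reinforcement learning systems such as the one in [8] replace this Q-table with a neural network, which lets the same learning loop scale to the much larger state spaces of real games.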

Another advantage of self-learning AIs is their ability to generate new content and experiences. In many games, the primary role of the AI is to provide challenge and entertainment. Through self-learning, the AI can constantly change and optimize its behavior to create new game experiences. For example, OpenAI's Dota 2 AI generated many unprecedented strategies by learning against humans [8]. This novelty and unpredictability enhance the playability and longevity of the game.

However, the application of self-learning AI in video games faces challenges. First, techniques such as deep learning and reinforcement learning typically require large amounts of data and computational resources, which is a challenge for many small game developers [9]. Second, there are important design issues, such as how to make the behavior of the AI interesting and rewarding for the player while still obeying the rules of the game [10]. Finally, self-learning AIs may raise moral and ethical issues: for example, if the AI learns a strategy that players dislike, or if the AI is used to cheat, how will this affect the fairness and fun of the game [11]?

In conclusion, the use of self-learning AIs in video games demonstrates their great potential, but also brings new challenges. Future research needs to further explore how to maximize the benefits of self-learning AI while solving the problems it brings.

5. The shift from preset scripts to self-learning AI

When we talk about artificial intelligence in video games, we often discuss how it affects the behavior of non-player characters (NPCs). In early video games, the behavior of NPCs relied heavily on scripts predefined by the developers. The main advantage of this approach is its simplicity and controllability [1]. However, as the complexity of video games and player expectations increased, the limitations of predefined scripts became apparent. To make the behavior of NPCs richer and more unpredictable, developers began experimenting with more sophisticated AI techniques.

Self-learning AI is one of these new techniques. Through deep and reinforcement learning, AI can learn from experience and optimize its strategies [12]. This shift from pre-programmed scripts to self-learning has affected video games in many ways.

First, self-learning AI opens up new possibilities for video games. For example, the AI can provide a personalized experience by learning the player's behavior [5]. In addition, self-learning AIs can generate new game content, such as new maps or enemy strategies, thus improving the playability and longevity of the game [8].
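As an illustration of the "personalize by learning from player behavior" point above, the sketch below keeps a running estimate of a player's win rate and uses it to tune enemy difficulty. The target win rate, smoothing factor, and difficulty mapping are assumptions chosen only for this example; they are not taken from the cited works.

```python
class AdaptiveDifficulty:
    """Illustrative player-adaptive tuner: estimates player skill and adjusts enemy level."""

    def __init__(self, target_win_rate: float = 0.6, smoothing: float = 0.1):
        self.target = target_win_rate      # win rate the game tries to hold the player at
        self.smoothing = smoothing         # how quickly the estimate reacts to new results
        self.estimated_win_rate = 0.5
        self.enemy_level = 1.0

    def record_encounter(self, player_won: bool) -> float:
        """Update the skill estimate after each fight and adjust enemy difficulty."""
        result = 1.0 if player_won else 0.0
        self.estimated_win_rate += self.smoothing * (result - self.estimated_win_rate)
        # Winning more often than targeted raises difficulty; losing more often lowers it.
        self.enemy_level = max(0.5, self.enemy_level + 0.2 * (self.estimated_win_rate - self.target))
        return self.enemy_level


tuner = AdaptiveDifficulty()
for outcome in [True, True, True, False, True]:
    level = tuner.record_encounter(outcome)
print(f"estimated win rate {tuner.estimated_win_rate:.2f}, enemy level {level:.2f}")
```

A full self-learning system would replace this hand-tuned rule with a learned model of the player, but the loop is the same: observe the player, update a model, adapt the game.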

However, the shift from pre-scripted to self-learning also brings new challenges. Self-learning AIs typically require large amounts of data and computational resources, which is a challenge for many small game developers [9]. In addition, the behavior of self-learning AIs may be too complex or unpredictable, which can affect the playability and fairness of the game [10].

Nevertheless, the potential and possibilities of self-learning AI have attracted the interest of many researchers and developers. With further research and practice, a balance can be found that maximizes the use of self-learning AI while maintaining player expectations and fairness [11].

Overall, the shift from pre-scripted to self-learning is an important trend in video game development. This shift not only demonstrates the evolution of AI technology but also reflects the need for complexity and personalized experiences in video games. In the future, we expect more research and practice to explore more possibilities of self-learning AI in video games [13].

6. Conclusion

The evolution of video games reflects not only technological advances, but also our expectations for interactive experiences. Both pre-scripted and self-learning AI are important tools for meeting these expectations, and their use in video games demonstrates the evolution and possibilities of AI technology.

The shift from preset scripts to self-learning AI marks a shift in video games from experiences with clear rules and controlled processes to experiences that are complex, dynamic, and personalized. Self-learning systems need to be able to handle more complex decision problems, generate more diverse content and experiences, and make the most of their capacity to learn, all while meeting player expectations and ensuring fair play.

The implications and challenges of this shift also provide directions for future research and practice. We need to explore more effective learning and optimization methods to cope with the complexity and variability of game environments; we need to design better evaluation and control mechanisms to ensure that AI behavior is consistent with player expectations and game rules; and we need to consider the moral and ethical issues of AI to protect player rights and game fairness.

Overall, the shift from pre-scripted to self-learning AI represents the quest for complexity and personalized experiences in video games. This shift is an ongoing process that requires not only the continued development of AI technology but also our deep understanding of game design and player experience. Only by combining the two will we be able to create truly interesting and challenging video games.

In the future, we expect to see more research and practice that explores and exploits the further possibilities of self-learning AI in video games. Whether in providing deeper gaming experiences or in driving innovation and development in video games, self-learning AI will be an important tool and partner. At the same time, a deeper and more comprehensive understanding and application of AI in games will inject new vitality and possibilities into the development of AI itself.


References

[1]. Rollings, A., & Adams, E. (2003). Andrew Rollings and Ernest Adams on game design. New Riders.

[2]. Rouse III, R. (2004). Game design: Theory and practice. Electronic Industry Press.

[3]. Zubek, R. (2004). A formal approach to game design and game research.

[4]. Hunicke, R., LeBlanc, M., & Zubek, R. (2004). MDA: A formal approach to game design and game research. Workshop technical report.

[5]. Laird, J. E., & van Lent, M. (2000). Human-level AI's killer application: Interactive computer games. AI Magazine, 21(2), 15.

[6]. Yannakakis, G. N., & Togelius, J. (2018). Artificial intelligence and games. Springer.

[7]. Mateas, M., & Stern, A. (2005). Procedural authorship: A case study of the interactive drama Façade.

[8]. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.

[9]. Berner, C., Brockman, G., Chan, B., Cheung, V., Debiak, P., Dennison, C., ... & Klimov, O. (2019). Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680.

[10]. Justesen, N., Torrado, R. R., Bontrager, P., Khalifa, A., Togelius, J., & Risi, S. (2019). Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation. arXiv preprint arXiv:1806.10729.

[11]. Zook, A., Harrison, B., & Riedl, M. O. (2019). Monte-Carlo Tree Search for Simulation-Based Strategy Analysis. In FDG.

[12]. Laird, J. E., & Duchi, J. C. (2000). Creating human-like synthetic characters with multiple skill levels: A case study using the Soar quakebot. In AAAI/IAAI (Vol. 2000, pp. 403-408).

[13]. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.


Cite this article

Zhu, J. (2024). A review of artificial intelligence in video games: From preset scripts to self-learning. Applied and Computational Engineering, 49, 149-153.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 4th International Conference on Signal Processing and Machine Learning

ISBN:978-1-83558-343-2(Print) / 978-1-83558-344-9(Online)
Editor:Marwan Omar
Conference website: https://www.confspml.org/
Conference date: 15 January 2024
Series: Applied and Computational Engineering
Volume number: Vol.49
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
