
Dynamic difficulty adjustment using deep reinforcement learning: A review
1 College of Computer Science and Software Engineering, Hohai University, China
* Author to whom correspondence should be addressed.
Abstract
With the evolution of the gaming industry, there is growing demand for games that offer immersive experiences beyond basic gratification. Traditional games with fixed difficulty levels often fail to accommodate players' diverse preferences and skill levels. To address this gap, dynamic difficulty adjustment (DDA) has emerged as a crucial element of game design. This review explores the application of Deep Reinforcement Learning (DRL) to DDA and contrasts it with traditional methods. By examining various DRL algorithms and their effectiveness, and by evaluating traditional models, the review identifies the potential of DRL to enhance player experience and outlines existing challenges. It also highlights future research directions, including the integration of DRL with Flow Theory and innovative evaluation methods. The goal is to provide a comprehensive overview of how DRL can advance game design and improve player satisfaction.
Keywords
Dynamic Difficulty Adjustment (DDA), Deep Reinforcement Learning (DRL), Game Design, Flow, Player Experience
Cite this article
Zheng, T. (2024). Dynamic difficulty adjustment using deep reinforcement learning: A review. Applied and Computational Engineering, 71, 157–162.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 6th International Conference on Computing and Data Science
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish with this series agree to the following terms:
1. Authors retain copyright and grant the series the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., posting it to an institutional repository or publishing it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their websites) prior to and during the submission process, as this can lead to productive exchanges as well as earlier and greater citation of the published work (see the open access policy for details).