1. Introduction
Nowadays, producing original art and animation consumes enormous effort, especially for NPCs and scenes that are not core to a game. Although this work matters, much of it is not particularly creative. On the one hand, it reduces development efficiency; on the other hand, it drains a significant amount of energy from small development teams, making it hard for them to produce outstanding games. Development teams could therefore selectively combine human creativity with generative AI on such non-core content, helping developers focus on the creative assignments and improving both working efficiency and quality.
The advantage of generative AI is that it can retrieve data from a database according to a prompt and recombine it into new content. In recent years, the emphasis of generative AI has been on generating images, text, and animation, which corresponds closely to the problems encountered in game development. Existing research includes the following. Nele Fee Bonn studied how generative algorithms can be used to design character concept art, animation, models, and dialogue, balance gameplay modes, and support anti-cheat measures in games [1]. JaeJun Lee and his team examined how generative AI returns initiative to designers: an AI-generated concept image can be handed to artists to edit into the form the team actually needs, so the original art style is not lost through communication problems and the team structure does not have to change because of the involvement of the market research department [2]. Davinder Singh and his team studied the difference in performance between two AI tools based on two different generative algorithms [3]. Moreover, Yasuo Kawai studied human-AI collaboration by building an FPS game project in a 3D city simulator, finding that combining human creativity with AI can increase efficiency, but that a balance between the two must be maintained [4].
This paper focuses on the application of generative algorithms in games, the differences in performance among AI tools based on them, and how to use them more effectively in game development to increase its quality and efficiency; the findings can also extend to other fields. The main parts of this article are a comparative analysis of typical technologies, case studies, challenges, and future prospects.
2. The comparative analysis of typical technologies
Artificial intelligence is a technology that uses computers to simulate human intelligence. Generative artificial intelligence, currently its most popular subfield, has naturally progressed rapidly, evolving from single-modality text generation to multimodal capabilities spanning images, voice, and even video. The foundation of these developments is the advancement of generative algorithms, the most representative of which are neural network algorithms. Most of their applications in gaming involve drawing and modeling; this article therefore focuses on analyzing generative adversarial networks (GANs), which excel in image and model generation, as well as variational autoencoders (VAEs).
2.1. Comparison of underlying technical implementations
Generative adversarial networks and variational autoencoders are both well suited to image and model generation, which is exactly the class of tasks this paper is concerned with. They are also the algorithms underlying many AI drawing tools, so their reliability and output quality are well established; for these reasons the author selected these two algorithms for comparison.
A GAN consists of a generator and a discriminator. The generator extracts data from the database and produces content according to the input requirements, while the discriminator applies a set of built-in or user-supplied criteria to judge whether the generated content meets the standard. The two networks form a zero-sum game, so the generator's output is continuously optimized until it finally satisfies the discriminator, which is why the architecture is called a generative adversarial network [5]. GANs can generate high-quality samples, ensure stable image quality, and handle computationally complex datasets, but their outputs must be carefully evaluated in each specific context to meet authenticity criteria; their performance depends heavily on the diversity of the training data, and they need to be retrained and re-optimized for different scenarios [1, 6].
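To make the generator-discriminator interaction concrete, the following is a minimal single-step training sketch written in PyTorch (a framework assumption; the cited works do not prescribe one), with hypothetical network sizes and flattened images.

import torch
import torch.nn as nn

# Minimal sketch of one GAN training step (hypothetical networks and sizes).
latent_dim, img_dim = 100, 64 * 64 * 3

G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):                       # real_images: (batch, img_dim), in [-1, 1]
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: distinguish real samples from generated ones.
    z = torch.randn(batch, latent_dim)
    fake_images = G(z).detach()
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

Alternating these two updates is the "continuous optimization" described above: each network improves only by exploiting the weaknesses of the other.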
VAE technology is a self-supervised neural network that learns to encode an input into a lower-dimensional representation and then decodes and reconstructs the data repeatedly so that the reconstruction approaches the input as efficiently as possible, which also makes it suitable for deep adversarial learning [5]. It has an encoder-decoder architecture and learns without labeled data. Because of this specific architecture it is very valuable for tasks of the same type as its training data, but it can only recognize data similar to the training set and performs poorly on inputs that differ from the training data; the reconstruction is also lossy, so some information is discarded [1, 7].
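A minimal sketch of this encode-reparameterize-decode loop, again assuming PyTorch and hypothetical dimensions, is shown below; the loss combines a reconstruction term (the source of the lossy behaviour noted above) with a KL regularizer on the latent space.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal VAE sketch: encode to a low-dimensional latent, then decode and
# compare the reconstruction with the input (sizes are assumptions).
class VAE(nn.Module):
    def __init__(self, img_dim=64 * 64, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(img_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, img_dim), scaled to [0, 1]
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence toward a standard normal prior.
    recon_term = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl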
2.2. Testing the performance of drawing tools on game scene generation
Typically, one significant reason independent game studios and individual developers spend a tremendous amount of resources on game development is the need to hire a large number of artists for character, scene, and animation design. To reduce these expenses while giving game developers greater initiative, two drawing tools, Midjourney and DALL-E, are compared here on the same tasks [2, 3].
All parameters, including time, space, and theme, are supplied by the user. DALL-E uses a diffusion model to generate images from text: it is a discrete variational autoencoder (discrete VAE) that employs a transformer architecture for text decoding and a U-Net architecture to convert the text into intermediate descriptive terms, expand them into detailed context, and finally produce the image, which is a relatively straightforward approach. Midjourney, on the other hand, generates images using generative adversarial networks (GANs) and can avoid repetitive iterations in image generation thanks to its Custom Zoom feature. Under default settings for all parameters, the image quality generated by Midjourney surpasses that of DALL-E 3, particularly when generating images that meet specific game scene requirements [6, 8]. In terms of speed, however, Midjourney is significantly slower than DALL-E: DALL-E averages 21.51 seconds per image, while Midjourney averages 143.31 seconds, considerably lagging behind [3]. Midjourney is also more expensive: its cost per generated image is $0.09, while for DALL-E it ranges from $0.04 to $0.08 [3]. Clearly, the lower cost of DALL-E makes it more suitable for low-budget projects or individual developers [3]. Midjourney extracts keywords directly from prompts to generate models, while DALL-E first generates a two-dimensional concept image before creating a three-dimensional model, which leads to higher player satisfaction with DALL-E than with Midjourney [4].
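The per-image timing comparison above can be reproduced on the DALL-E side with a simple harness such as the sketch below. It assumes the OpenAI Python SDK with an API key in the environment, and the prompt is a hypothetical example; Midjourney is omitted because it does not offer an official public API for this kind of scripted benchmark.

import time
from statistics import mean
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key in the environment

client = OpenAI()
PROMPT = "isometric fantasy village game scene, concept art"  # hypothetical test prompt

def time_dalle(n_images=5, model="dall-e-3"):
    # Generate several images and return the average wall-clock time per image.
    durations = []
    for _ in range(n_images):
        start = time.perf_counter()
        client.images.generate(model=model, prompt=PROMPT, n=1, size="1024x1024")
        durations.append(time.perf_counter() - start)
    return mean(durations)

if __name__ == "__main__":
    print(f"average seconds per image: {time_dalle():.2f}")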
3. Typical case studies
Building on the algorithmic principles and the tools developed from generative algorithms, this chapter focuses on their application to specific game projects. The projects include an animation pipeline using DALL-E 3 and Midjourney combined with the Unity 2D engine; generating animated actions with an improved variational autoencoder and continuously optimizing it; using a generative adversarial network to generate thumbnail icons for games; and generating different actions for a specific character while repeatedly refining the generative model. In addition, neural-network-based models are used not only for drawing but also as robotic players that play alongside human players, increasing immersion and providing a simulated real environment for testing [2, 4, 9, 10, 11].
In the PLATEAU urban-model experiment based on the Unity engine, character design used DALL-E to first produce concept art, which was then turned into 3D models; this approach applies not only to games but also to urban simulation and other fields. The study also found that using ChatGPT and Claude for code generation is fast and efficient [4]. Additionally, Midjourney was employed in conjunction with the Unity engine to generate 2D animations, although for certain perspectives commonly used in games the results were less satisfactory than hand-drawn work [4]. The project also indicated that DALL-E-based urban simulation is well suited to the FPS genre [4].
A VAE was also used to generate animations of existing characters. The generation conditions are an initial reference image and a specified pose; data are then collected from a designated database and divided into two datasets, one serving as the generation material and the other as the evaluation standard [9]. The model has three key components: ReferenceNet, PoseGuider, and MotionModule [9]. Training covers two aspects: from pose to image and from pose to character [9].
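A heavily simplified, hypothetical composition of these three components might look like the sketch below; the class names come from [9], but their interfaces and the way they are wired together are assumptions, not the published implementation.

import torch

# Hypothetical inference sketch for the reference-plus-pose pipeline of [9];
# module call signatures are assumptions.
class SpriteAnimator:
    def __init__(self, reference_net, pose_guider, motion_module, decoder):
        self.reference_net = reference_net   # encodes the character reference image
        self.pose_guider = pose_guider       # encodes each target pose
        self.motion_module = motion_module   # enforces temporal consistency across frames
        self.decoder = decoder               # renders latent frames to images

    @torch.no_grad()
    def animate(self, reference_image, pose_sequence):
        identity = self.reference_net(reference_image)            # character identity features
        pose_feats = torch.stack([self.pose_guider(p) for p in pose_sequence])
        latents = self.motion_module(identity, pose_feats)        # one latent per frame
        return [self.decoder(frame) for frame in latents]         # list of rendered frames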
A game icon is an abstract representation of a specific type of gaming action or activity and significantly affects the player's gaming experience [10]. Because such icons are highly abstract and general, composing them by hand is very time-consuming, and using GANs to generate them can greatly simplify the work [10]. The data come mainly from the Unity asset store; the materials are first categorized by feature and then standardized [10]. Parameters were also adjusted and supervised training was conducted for the GAN [10]. Finally, the generated images are modified to meet the required standards [10]. However, the study found that the large volume of generated data makes data processing and model training challenging [10].
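The standardization step mentioned above could, for instance, resolve icons to a fixed size and value range before training; the exact pipeline of [10] is not published here, so the directory names, icon size, and scaling below are assumptions.

from pathlib import Path
from PIL import Image
import numpy as np

# Hypothetical standardization pass for icon assets prior to GAN training.
def standardize_icons(src_dir="icons_raw", dst_dir="icons_64", size=(64, 64)):
    Path(dst_dir).mkdir(exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path).convert("RGB").resize(size, Image.LANCZOS)
        arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0  # scale to [-1, 1] for a tanh generator
        np.save(Path(dst_dir) / (path.stem + ".npy"), arr)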
In a game project called "NOX", Midjourney image generation was used, allowing characters to be depicted from various perspectives based on given prompts and enabling various actions to be set [2]. This significantly increased efficiency and maintained stylistic consistency, allowing the production team to uphold both stylistic coherence and project freedom without interference from the market research department [2]. It is essential to note, however, that this would not have worked without modification and optimization by the artists [2]. In contrast, a concurrent project named "The Walking Dead" did not employ AI drawing technology and involved more artists than the NOX project. Consequently, more time was required for communication with the artists, which affected efficiency and risked losing the original art style through personnel turnover or reassignment [2]. Furthermore, this situation allowed the market research department to interfere more in the production team's decision-making [2].
Neural networks are not only used for producing drawings or animations; they can also serve as robotic players integrated into games, playing alongside human players [11]. This can provide players with a testing environment that closely resembles the real game environment [11]. Research by Kim, Munyeong, and colleagues found that, in a human-computer practice scenario, ChatGPT performed excellently in the game Spyfall thanks to its strong understanding and decision-making capabilities [11]. However, there appear to be situational limitations: ChatGPT performs worse when playing non-spy roles than when acting as the spy [11]. Specifically, it may inadvertently offer clues to spy players, which may be attributable to inadequacies in handling non-verbal cues and certain rule representations [11]. This manifests as rules that human players can understand but that lie beyond the AI's understanding, suggesting a language-processing issue [11].
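As a rough illustration of how such a robotic player could be wired up, the sketch below prompts a chat model to act as a non-spy participant. It assumes the OpenAI Python SDK; the model name, system prompt, and conversation format are assumptions for illustration and do not reproduce the experimental protocol of [11].

from openai import OpenAI  # assumes the OpenAI Python SDK and an API key in the environment

client = OpenAI()

SYSTEM_RULES = (
    "You are a non-spy player in a Spyfall-style social deduction game. "
    "You know the secret location; never reveal it directly, but answer "
    "questions so other non-spies can tell you are not the spy."
)

def agent_reply(chat_history):
    # chat_history: list of {"role": ..., "content": ...} turns from the table so far.
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[{"role": "system", "content": SYSTEM_RULES}] + chat_history,
    )
    return response.choices[0].message.content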
4. Challenges and the future
Firstly, when using such tools to complete work, domain knowledge of the field is still necessary: AI-generated content is only a reference and cannot be applied directly, so users should treat generated material as a basis to be modified into the required style [8]. Secondly, in certain scenarios there are expressions that are difficult for artificial intelligence to comprehend, such as metaphors, analogies, or common phrases, which can reduce model accuracy and produce unanticipated outputs [11]. Moreover, using generated materials may lead to excessively large game sizes, a lack of engagement in the gameplay, and heavy demands on the computational power of both players and developers. Excessive enhancement of immersion through AI could also lead to addiction and a series of related problems that might exacerbate public bias against generative artificial intelligence [1]. Regarding content quality, AI-generated content cannot be guaranteed, and there may also be privacy and copyright issues, since the AI's training data may contain copyrighted materials, leading to unnecessary complications for producers. Additionally, most players currently resist the presence of AI-generated content in games, feeling that such content does not resonate emotionally and contributes to fatigue; they believe the creation of game content should return to unique creative experiences that let players establish a deeper connection with creators. Some players have even led campaigns against the gaming companies involved and their products. Furthermore, some gaming platforms require that the use of generative AI be disclosed; otherwise, the game may not be published.
In the future, generative tools intended for personal devices should become more lightweight, reducing database size while improving response accuracy, thereby enabling the technology to be applied in areas beyond gaming [11]. During data processing, data should first be desensitized, and relevant laws and regulations should be improved [11]. Models should evolve toward greater diversity [11].
In the future, generative artificial intelligence will find deeper applications in the gaming industry, such as voicing game characters instead of relying on a pre-recorded sound library, and providing personalized, real-time responses to player actions, thereby delegating more authority to players.
5. Conclusion
This paper has studied the application of generative algorithms, particularly neural network algorithms, in gaming, focusing on their roles in drawing, animation, modeling, and as robotic players that enhance player engagement. It also compared the practical performance of two AI drawing tools based on neural network algorithms in the same scenario, using a control-variable method. The study further finds that parameter settings, model adjustments, and data preprocessing are crucial to the quality of content generated by generative adversarial networks. Future work should compare additional algorithms, such as convolutional neural networks, and explore generative AI tools based on them. It should also cover aspects beyond drawing, such as deeper learning capabilities and their intelligent applications in gaming, to emphasize the interaction between games and players. This research contributes to the effective use of generative AI technology in gaming and to improving algorithms that enhance efficiency and advance the intelligence of games.
References
[1]. Nele Fee Bonn. The impact of integrating Artificial Intelligence into the video games industry. 2023. https://kth.diva-portal.org/smash/get/diva2:1820226/FULLTEXT01.pdf
[2]. JaeJun Lee, So-Youn Eom and JunHee Lee. Empowering Game Designers with Generative AI. International Journal on Computer Science and Information Systems, Vol. 18, No. 2, pp. 213-230. 2021.
[3]. Davinder Singh, Joideep Banerjee, Lionel Jayaraj. 2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), pp. 946-951. DOI: 10.1109/METROXRAINE62247.2024.10796123. 2024.
[4]. Yasuo Kawai. Using Generative AI for Game Development Subject to Technical Constraints. 2024 International Conference on Cyberworlds (CW). DOI: 10.1109/CW64301.2024.00081. 2024.
[5]. Y. Yang, H. Du, G. Sun, Z. Xiong, D. Niyato and Z. Han. "Exploring Equilibrium Strategies in Network Games with Generative AI," in IEEE Network. DOI: 10.1109/MNET.2024.3521887.
[6]. Jiajia Su and Zhongjun He. Enhancing User Experience Evaluation of Graphic Art Style Games through Collaboration with Generative AI. In Proceedings of the 2024 5th International Conference on Computer Science and Management Technology (ICCSMT '24). Association for Computing Machinery, New York, NY, USA, pp. 31-38. 2025.
[7]. Balagopal Ramdurai. The Impact, Advancements and Applications of Generative AI. 2023.
[8]. Qiu, S. Generative AI Processes for 2D Platformer Game Character Design and Animation. Lecture Notes in Education Psychology and Public Media, 29, 146-160. 2023.
[9]. Cheng-An Hsieh, Jing Zhang, Ava Yan. Sprite Sheet Diffusion: Generate Game Character for Animation. 2025.
[10]. Rafal Karp, Zaneta Swiderska-Chadaj. Automatic generation of graphical game assets using GAN. 2021.
[11]. Kim, Munyeong, and Kim, Sungsu. "Generative AI in Mafia-like Game Simulation." arXiv preprint arXiv:2309.11672, 2023.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.