
Scene Modeling in Game Development Based on Generative Adversarial Networks
1 College of Mathematics and Informatics, South China Agricultural University, Guangzhou, China
* Author to whom correspondence should be addressed.
Abstract
Owing to rapid advances in generative models, there is growing interest in designing model architectures that produce compelling images and even 3D shapes. The motivation for this work is that 3D asset modeling in game development remains challenging and time-consuming, and 3D shapes produced by generative models may be a powerful tool for this problem. The work attempts to build game scenes, such as a city and a cave, which are typical scenes requiring many random yet similar objects. This paper aims to explore a complete workflow for applying GANs to game development. The paper first introduces the background of game development and the progress of generative models, then explains the principle of Generative Adversarial Networks, and finally proposes a process for using them to improve productivity in game scene modeling. This work finds that the approach can considerably reduce the repetitive work of producing large numbers of similar objects.
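As context for the GAN principle summarized in the abstract, the sketch below illustrates the standard alternating generator/discriminator training scheme. It is a minimal, illustrative toy example in PyTorch, not the paper's scene-modeling pipeline; the 2-D data, network sizes, and hyperparameters are assumptions chosen only to make the adversarial objective concrete.

# Minimal, illustrative GAN training loop (PyTorch). Toy 2-D data and small
# MLPs are assumptions; this is not the paper's implementation.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator G maps latent noise z to samples; discriminator D scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def sample_real(batch):
    # Stand-in for real training data (e.g., encoded scene assets).
    return torch.randn(batch, data_dim) * 0.5 + 2.0

for step in range(1000):
    real = sample_real(64)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator, i.e. push D(G(z)) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()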
Keywords
Scene modeling, game development, generative adversarial networks.
Cite this article
Liu, H. (2024). Scene Modeling in Game Development Based on Generative Adversarial Networks. Applied and Computational Engineering, 110, 36-41.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of CONF-MLA 2024 Workshop: Securing the Future: Empowering Cyber Defense with Machine Learning and Deep Learning
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see the Open access policy for details).