Research on image style transfer based on deep learning

Research Article
Open access

Junlin Li 1* , Chen Lin 2 , Yingbo Zhao 3
  • 1 South China Normal University    
  • 2 Zhejiang Gongshang University    
  • 3 University of Liverpool    
  • *corresponding author 20202005377@m.scnu.edu.cn
Published on 4 February 2024 | https://doi.org/10.54254/2755-2721/35/20230403
ACE Vol.35
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-295-4
ISBN (Online): 978-1-83558-296-1

Abstract

Image style transfer has been a popular topic in many fields, and it has a long history: the earliest examples date back to the previous century. A range of style transfer techniques is currently flourishing, from manual modeling to neural networks, and these methods keep improving while requiring less time. Following the timeline, this paper analyzes deep-learning-based image style transfer in two main categories, picture iteration and model iteration, and summarizes the techniques in each. For picture-iteration-based style transfer, the paper introduces maximum mean discrepancy, Markov random fields, and deep image analogy. For model-iteration-based style transfer, it introduces generative models and image reconstruction decoders. Finally, the paper offers some recommendations and an outlook on the future.
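As a concrete illustration of the picture-iteration family surveyed here, methods in the line of Gatys et al. [1] iteratively update the output image so that its deep-feature statistics match those of the style image; one common statistic is the Gram matrix of CNN feature maps. The following is a minimal NumPy sketch of that style loss; the random arrays are hypothetical stand-ins for real CNN activations, not the paper's actual setup:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width).

    Entry (i, j) is the inner product of channel i with channel j,
    capturing which feature channels co-activate -- the style statistic
    used by Gatys-style picture-iteration methods.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Mean squared difference between the Gram matrices of the
    generated image and the style image at one network layer."""
    g_gen = gram_matrix(generated_feats)
    g_style = gram_matrix(style_feats)
    return float(np.mean((g_gen - g_style) ** 2))

# Toy feature maps standing in for CNN activations.
rng = np.random.default_rng(0)
feats_a = rng.standard_normal((8, 4, 4))
feats_b = rng.standard_normal((8, 4, 4))
print(style_loss(feats_a, feats_a))      # 0.0: identical styles
print(style_loss(feats_a, feats_b) > 0)  # True: styles differ
```

In a full picture-iteration method, this loss (summed over several layers, together with a content loss) would be minimized by gradient descent on the pixels of the generated image itself.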

Keywords:

deep learning, image style transfer, computer vision

Li, J.; Lin, C.; Zhao, Y. (2024). Research on image style transfer based on deep learning. Applied and Computational Engineering, 35, 259-264.

References

[1]. Gatys L A, Ecker A S, Bethge M. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.

[2]. Li C, Wand M. Combining Markov random fields and convolutional neural networks for image synthesis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2479-2486.

[3]. Liao J, Yao Y, Yuan L, et al. Visual attribute transfer through deep image analogy. arXiv preprint arXiv:1705.01088, 2017.

[4]. Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution. Proceedings of the European Conference on Computer Vision, 2016: 694-711.

[5]. Ulyanov D, Lebedev V, Vedaldi A, et al. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417, 2016.

[6]. Zhang H, Dana K. Multi-style generative network for real-time transfer. Proceedings of the European Conference on Computer Vision Workshops, 2018: 10-21.

[7]. Li Y, Fang C, Yang J, et al. Universal style transfer via feature transforms. Advances in Neural Information Processing Systems, 2017, 30.

[8]. Castillo C, De S, Han X, et al. Son of Zorn's lemma: Targeted style transfer using instance-aware semantic segmentation. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017: 1348-1352.

[9]. Gatys L A, Bethge M, Hertzmann A, et al. Preserving color in neural artistic style transfer. arXiv preprint arXiv:1606.05897, 2016.

[10]. Li Y, Liu M Y, Li X, et al. A closed-form solution to photorealistic image stylization. Proceedings of the European Conference on Computer Vision, 2018: 453-468.


Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2023 International Conference on Machine Learning and Automation

ISBN:978-1-83558-295-4(Print) / 978-1-83558-296-1(Online)
Editor:Mustafa İSTANBULLU
Conference website: https://2023.confmla.org/
Conference date: 18 October 2023
Series: Applied and Computational Engineering
Volume number: Vol.35
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
