Research Article
Open access
Published on 31 May 2023
Guo, M. (2023). Image super-resolution techniques using deep neural networks. Applied and Computational Engineering, 5, 224-236.

Image super-resolution techniques using deep neural networks

Meilin Guo 1, *
  • 1 Australian National University

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/5/20230567

Abstract

Super-resolution (SR) based on deep convolutional neural networks is a rapidly developing field with many real-world applications. In this paper, we examine state-of-the-art super-resolution neural networks in depth, using recently released, challenging datasets to evaluate single-image SR. We present a taxonomy that divides existing techniques into six categories: upsampling, residual, recursive, dense-connection, attention-based, and loss-function designs. This taxonomy applies broadly to deep learning-based SR networks. The comprehensive analysis shows that accuracy has increased steadily and rapidly over the past few years, accompanied by corresponding growth in model complexity and in the availability of large-scale data. Current techniques substantially outperform the earlier methods used as benchmarks. On this basis, the paper puts forward suggestions for future research.

Keywords

Image Super-resolution (SR), Deep Learning, Convolutional Neural Networks (CNNs), Computer Vision, Survey

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 3rd International Conference on Signal Processing and Machine Learning

Conference website: http://www.confspml.org
ISBN: 978-1-915371-57-7 (Print) / 978-1-915371-58-4 (Online)
Conference date: 25 February 2023
Editor: Omer Burak Istanbullu
Series: Applied and Computational Engineering
Volume number: Vol. 5
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the version of the work published in this series (e.g., posting it to an institutional repository or publishing it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).