Research Article
Open access
Published on 20 March 2025

Fusing Multiple Exposure Images for HDR Images by Deep Learning

Longyao Wu 1,*
  • 1 Ulster College, Shaanxi University of Science & Technology, Xi'an, China

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/2025.21531

Abstract

This paper explores the application of deep learning techniques to the fusion of multiple exposure images into high dynamic range (HDR) images, emphasizing their transformative impact on traditional HDR imaging methods. HDR images are renowned for capturing a broader range of luminosity; however, traditional multi-exposure methods face challenges such as camera shake and ghosting in dynamic scenes. The introduction of deep learning has automated and enhanced the HDR image generation process, particularly in image fusion, deblurring, and artifact correction. This paper reviews the relevant deep learning algorithms and architectures, analyzes the strengths and limitations of current HDR imaging approaches, and suggests future research directions aimed at improving efficiency, accuracy, and applicability across various domains.
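
To make the fusion step concrete, the following is a minimal sketch of deep multi-exposure HDR fusion, assuming PyTorch, pre-aligned LDR inputs, and a fixed display gamma. The class FusionNet, the helper ldr_to_linear, the constant GAMMA, and all layer sizes are illustrative assumptions rather than the architecture of any specific method surveyed here: the network concatenates each LDR exposure with its linearized, exposure-normalized counterpart and predicts per-pixel blending weights whose weighted sum yields the fused HDR image.

import torch
import torch.nn as nn

GAMMA = 2.2  # assumed display gamma for mapping LDR pixels back to the linear domain

def ldr_to_linear(ldr, exposure_time):
    # Invert the camera's gamma curve, then normalize by exposure time so that
    # all inputs live in a common linear radiance domain before fusion.
    return ldr.clamp(0.0, 1.0).pow(GAMMA) / exposure_time

class FusionNet(nn.Module):
    # Tiny CNN that predicts per-pixel blending weights for N exposures.
    def __init__(self, n_exposures=3):
        super().__init__()
        in_ch = n_exposures * 6  # each exposure: 3 LDR channels + 3 linearized channels
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_exposures, 3, padding=1),  # one weight map per exposure
        )

    def forward(self, ldrs, exposure_times):
        # ldrs: list of N tensors of shape (B, 3, H, W), values in [0, 1], pre-aligned.
        linear = [ldr_to_linear(x, t) for x, t in zip(ldrs, exposure_times)]
        feats = torch.cat(ldrs + linear, dim=1)
        weights = torch.softmax(self.body(feats), dim=1)  # (B, N, H, W), sums to 1
        # The fused HDR image is the per-pixel weighted sum of the linearized exposures.
        return sum(w.unsqueeze(1) * lin
                   for w, lin in zip(weights.unbind(dim=1), linear))

if __name__ == "__main__":
    net = FusionNet(n_exposures=3)
    ldrs = [torch.rand(1, 3, 64, 64) for _ in range(3)]  # short, mid, long exposures
    hdr = net(ldrs, exposure_times=[0.25, 1.0, 4.0])
    print(hdr.shape)  # torch.Size([1, 3, 64, 64])

In practice, such networks are typically trained with a loss computed in a tone-mapped domain and paired with an alignment or attention stage so that moving objects do not leave ghosting artifacts in the fused result.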

Keywords

High Dynamic Range (HDR), Deep Learning, Image Fusion, Neural Networks, Ghosting Correction

Cite this article

Wu, L. (2025). Fusing Multiple Exposure Images for HDR Images by Deep Learning. Applied and Computational Engineering, 138, 213-218.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 3rd International Conference on Software Engineering and Machine Learning

Conference website: https://2025.confseml.org/
ISBN: 978-1-83558-981-6 (Print) / 978-1-83558-982-3 (Online)
Conference date: 2 July 2025
Editor: Marwan Omar
Series: Applied and Computational Engineering
Volume number: Vol. 138
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2025 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).