Research Article
Open access
Published on 15 May 2025

Underwater Vision Technologies for Smart Fisheries: A Comprehensive Review of OpenCV-Based Optimization and Edge Computing Applications

Huijin Lv 1, *
  • 1 Dundee International Institute, Central South University, Changsha, Hunan Province, China, 410006

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/2025.22709

Abstract

With the deepening exploration of marine resources and the global emphasis on sustainable development, intelligent fishery has emerged as a critical domain for advancing ecological conservation and operational efficiency. Underwater vision technology, a cornerstone of intelligent fishery systems, faces substantial challenges in complex underwater environments, where light attenuation, turbidity, biofouling, and dynamic currents degrade image quality and impede real-time decision-making. To address these limitations, this paper systematically reviews the integration of OpenCV-based image processing techniques with edge computing frameworks, which together enhance the robustness and adaptability of underwater visual systems. OpenCV’s algorithms, including Contrast Limited Adaptive Histogram Equalization (CLAHE) for low-light enhancement, geometric transformations for distortion correction, and YOLO-based object detection, have been shown to significantly improve image clarity and target recognition accuracy. Simultaneously, edge computing alleviates latency and bandwidth constraints by enabling real-time data processing on embedded devices, achieving sub-200 ms response times for critical tasks such as dissolved oxygen monitoring and fish behavior analysis. Field validations underscore these performance gains, including 92% recognition accuracy in coral reef monitoring and 85% mean Average Precision (mAP) for aquatic species detection using MobileNet-SSD models. Despite these advances, challenges remain in extreme conditions, in optimizing computational resources on edge devices, and in fostering the interdisciplinary collaboration needed to integrate marine biology insights into algorithmic design. Future research directions highlight hybrid architectures that combine physics-based restoration with quantized deep learning, bio-inspired optical sensors, and socio-technical frameworks that support equitable technology adoption.
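As a concrete illustration of the enhancement step mentioned above, the following is a minimal Python sketch of CLAHE-based low-light correction with OpenCV; the file names, clip limit, and tile size are illustrative assumptions rather than values taken from the reviewed studies.

import cv2

def enhance_underwater_frame(bgr_frame, clip_limit=2.0, tile_grid=(8, 8)):
    # Convert to LAB so that only the luminance channel is equalized,
    # leaving the colour channels untouched.
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)  # local, contrast-limited histogram equalization
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    frame = cv2.imread("underwater_frame.png")  # hypothetical input image
    if frame is not None:
        cv2.imwrite("enhanced_frame.png", enhance_underwater_frame(frame))

In a pipeline of the kind reviewed here, the enhanced frame would then typically be passed to a detector (for example, a MobileNet-SSD or YOLO model loaded through OpenCV's dnn module) running on an embedded edge device.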

Keywords

OpenCV, edge computing, intelligent fishery, underwater vision, image processing technology


Cite this article

Lv, H. (2025). Underwater Vision Technologies for Smart Fisheries: A Comprehensive Review of OpenCV-Based Optimization and Edge Computing Applications. Applied and Computational Engineering, 151, 1-9.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 3rd International Conference on Software Engineering and Machine Learning

Conference website: https://2025.confseml.org/
ISBN: 978-1-80590-091-7 (Print) / 978-1-80590-092-4 (Online)
Conference date: 2 July 2025
Editor: Marwan Omar
Series: Applied and Computational Engineering
Volume number: Vol. 151
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2025 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).