Research Article
Open access
Published on 16 April 2025

Improved YOLO11-based inspection method for peg and hole parts

Yongpeng Tian 1, Bo Liu 2, Bingyuan Zhu 3, Jian Zhang 4,*
  • 1 Tongji University
  • 2 Tongji University
  • 3 Tongji University
  • 4 Tongji University

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2977-3903/2025.22385

Abstract

In shaft-hole (peg-in-hole) assembly scenes, occlusion and viewpoint changes make accurate real-time detection of the target parts difficult. Building on the YOLO11 network structure, this paper introduces the RepViT module, an edge-information-based feature fusion module (EFI), and a P-EfficientHead detection head, and proposes a multi-module fusion improved YOLO11 network for shaft-hole assembly scenarios. In experiments on the Pascal VOC dataset, mAP50 and mAP50-95 improve by 3.7% and 4%, while precision and recall improve by 2.9% and 2.5%, respectively; validation on a self-built dataset also yields good results. These results show that the proposed multi-module fusion improved YOLO11 network delivers better detection performance.
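To make the edge-information fusion idea concrete, below is a minimal PyTorch-style sketch of what an EFI-like block could look like: a fixed depthwise Sobel branch extracts edge responses, which are squeezed into a spatial gate that re-weights the incoming features. The class name, channel handling, and gated residual fusion are illustrative assumptions, not the authors' exact EFI design.

```python
# Hypothetical sketch of an edge-information feature fusion (EFI-style) block.
# The Sobel edge branch and the sigmoid-gated residual fusion are assumptions;
# the paper's actual EFI module may differ.
import torch
import torch.nn as nn

class EdgeFeatureFusion(nn.Module):
    """Sobel edge cues gate a backbone feature map (illustrative only)."""

    def __init__(self, channels: int):
        super().__init__()
        # Fixed 3x3 Sobel kernels, applied depthwise (one Gx/Gy pair per channel).
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        kernel = torch.stack([gx, gy]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.sobel = nn.Conv2d(channels, 2 * channels, kernel_size=3,
                               padding=1, groups=channels, bias=False)
        with torch.no_grad():
            self.sobel.weight.copy_(kernel)
        self.sobel.weight.requires_grad = False  # keep the edge prior fixed
        # 1x1 conv squeezes the Gx/Gy responses into a per-pixel gate in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        edges = self.sobel(x)    # depthwise Sobel responses
        attn = self.gate(edges)  # edge-aware spatial attention map
        return x + x * attn      # re-weight features, keep a residual path

# Usage: fuse edge cues into a 256-channel feature map from the neck.
feat = torch.randn(2, 256, 40, 40)
fused = EdgeFeatureFusion(256)(feat)
print(fused.shape)  # torch.Size([2, 256, 40, 40])
```

Freezing the Sobel weights keeps the edge prior fixed, while the learned 1x1 gate decides how strongly edge responses should modulate each spatial location.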

Keywords

YOLO11, peg-in-hole, object detection, multi-feature fusion


Cite this article

Tian, Y., Liu, B., Zhu, B., & Zhang, J. (2025). Improved YOLO11-based inspection method for peg and hole parts. Advances in Engineering Innovation, 16(4), 24-36.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Journal: Advances in Engineering Innovation

Volume number: Vol. 16
ISSN: 2977-3903 (Print) / 2977-3911 (Online)

© 2025 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).