Planar grasp detection using generative multi-column convolutional neural networks

Research Article
Open access

Haikun Yuan 1* , Haipeng Huang 2
  • 1 Xidian University    
  • 2 Nanjing University of Posts and Telecommunications    
  • *corresponding author 20049200117@stu.xidian.edu.cn
Published on 25 September 2023 | https://doi.org/10.54254/2755-2721/10/20230194
ACE Vol.10
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-009-7
ISBN (Online): 978-1-83558-010-3

Abstract

This paper presents an accurate, real-time generative multi-column convolutional neural network for two-dimensional planar grasp detection. A multi-column structure improves the network's ability to extract features at different scales: three parallel channels with three different receptive fields let the network learn features at multiple scales, making it more adaptable to complex environments. By generating grasps directly rather than sampling and ranking grasp candidates, the network avoids shortcomings such as long computation times. Thanks to its light structure, inference is fast enough for the network to be deployed in closed-loop settings. While training the network, we found that some labels in the Cornell grasping dataset are unsuitable for planar grasp detection training, because certain labels represent different grasping angles. Compared with other models, ours achieves higher accuracy on the Cornell grasping dataset, reaching 94%, and runs at 13 frames per second.
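Generative grasp detectors of this kind (e.g. GG-CNN [8]) typically output dense per-pixel maps, one value per pixel for grasp quality, gripper angle, and opening width, in a single forward pass, instead of scoring sampled candidates one by one. The following is a minimal sketch of how such maps could be decoded into a single grasp pose; the map names and shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def decode_grasp(quality, angle, width):
    """Decode per-pixel output maps into a single planar grasp.

    quality, angle, width: 2-D arrays of equal shape, one value per pixel.
    Returns the (x, y) centre of the highest-quality pixel together with
    the gripper angle and opening width predicted at that pixel.
    """
    y, x = np.unravel_index(np.argmax(quality), quality.shape)
    return (int(x), int(y)), float(angle[y, x]), float(width[y, x])

# Toy maps: the best grasp is planted at pixel (row=2, col=1).
q = np.zeros((4, 4))
q[2, 1] = 0.9
ang = np.full((4, 4), 0.5)   # radians
w = np.full((4, 4), 30.0)    # pixels
print(decode_grasp(q, ang, w))  # ((1, 2), 0.5, 30.0)
```

Because every pixel is scored in one pass, this decoding step is cheap, which is what makes generative detectors suitable for the closed-loop use the abstract mentions.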

Keywords:

grasp detection, generative method, multi-column CNN, Cornell grasping dataset.

Yuan, H.; Huang, H. (2023). Planar grasp detection using generative multi-column convolutional neural networks. Applied and Computational Engineering, 10, 282-288.

References

[1]. Sahbani A, El-Khoury S and Bidaud P 2012 An overview of 3D object grasp synthesis algorithms Robotics and Autonomous Systems vol 60(3) p 326-336.

[2]. Bicchi A and Kumar V 2000 Robotic grasping and contact: a review ICRA Millennium Conference, IEEE International Conference on Robotics & Automation Symposia vol 1.

[3]. Bohg J, Morales A, Asfour T and Kragic D 2014 Data-driven grasp synthesis – a survey IEEE Transactions on Robotics vol 30(2) p 289-309.

[4]. Lenz I, Lee H and Saxena A 2013 Deep learning for detecting robotic grasps The International Journal of Robotics Research vol 34(4-5).

[5]. Chen J, Xie Z and Dames P 2022 The semantic PHD filter for multi-class target tracking: from theory to practice Robotics and Autonomous Systems vol 149 103947.

[6]. Chen J and Dames P 2022 Multi-class target tracking using the semantic PHD filter Robotics Research: The 19th International Symposium ISRR p 526-541.

[7]. Mahler J, Liang J, Niyaz S, Laskey M, Doan R, Liu X, Ojea JA and Goldberg K 2017 Dex-Net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics Robotics: Science and Systems (RSS).

[8]. Morrison D, Corke P and Leitner J 2018 Closing the loop for robotic grasping: a real-time, generative grasp synthesis approach Robotics: Science and Systems (RSS).

[9]. Jiang Y, Moseson S and Saxena A 2011 Efficient grasping from RGBD images: learning using a new rectangle representation IEEE International Conference on Robotics and Automation p 3304-3311.

[10]. Pinto L and Gupta A 2016 Supersizing self-supervision: learning to grasp from 50K tries and 700 robot hours IEEE International Conference on Robotics and Automation (ICRA) p 3406-3413.

[11]. Redmon J and Angelova A 2014 Real-time grasp detection using convolutional neural networks Proceedings IEEE International Conference on Robotics & Automation p 1316-1322.

[12]. Simonyan K and Zisserman A 2014 Very deep convolutional networks for large-scale image recognition CoRR abs/1409.1556.

[13]. Krizhevsky A, Sutskever I and Hinton G 2012 ImageNet classification with deep convolutional neural networks Advances in Neural Information Processing Systems vol 25.

[14]. Krizhevsky A, Sutskever I and Hinton GE 2017 ImageNet classification with deep convolutional neural networks Communications of the ACM vol 60 p 84-90.

[15]. Depierre A, Dellandréa E and Chen L 2018 Jacquard: a large-scale dataset for robotic grasp detection IEEE/RSJ International Conference on Intelligent Robots and Systems p 3511-3516.

[16]. Morrison D, Corke P and Leitner J 2019 Learning robust, real-time, reactive robotic grasping The International Journal of Robotics Research vol 39(2-3) p 183-201.


Cite this article

Yuan, H.; Huang, H. (2023). Planar grasp detection using generative multi-column convolutional neural networks. Applied and Computational Engineering, 10, 282-288.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2023 International Conference on Mechatronics and Smart Systems

ISBN: 978-1-83558-009-7 (Print) / 978-1-83558-010-3 (Online)
Editors: Alan Wang, Seyed Ghaffar
Conference website: https://2023.confmss.org/
Conference date: 24 June 2023
Series: Applied and Computational Engineering
Volume number: Vol.10
ISSN: 2755-273X (Print) / 2755-2721 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
