Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Jeffrey Mahler (UC Berkeley)
American | Robotics, UC Berkeley

■ View full text
https://arxiv.org/abs/1703.09312


■ Researchers
Jeffrey Mahler
UC Berkeley
Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, Ken Goldberg

■ Abstract
To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8 sec with a success rate of 93% on eight known objects with adversarial geometry and is 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp planner also has the highest success rate on a dataset of 10 novel rigid objects and achieves 99% precision (one false positive out of 69 grasps classified as robust) on a dataset of 40 novel household objects, some of which are articulated or deformable. Code, datasets, videos, and supplementary material are available at this http URL.
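
■ Illustrative sketch (not from the paper)
The abstract describes a GQ-CNN that takes a depth image and a grasp candidate, specified by planar position, angle, and gripper depth, and predicts the probability of grasp success. The PyTorch code below is a minimal sketch of that idea only, under assumed details: the candidate is represented as a 32x32 depth crop aligned to the grasp axis plus the gripper depth as a scalar input. The layer sizes, the class name GraspQualityCNN, and the candidate-ranking demo are illustrative assumptions, not the published Dex-Net 2.0 architecture or code.

import torch
import torch.nn as nn

class GraspQualityCNN(nn.Module):
    """GQ-CNN-style grasp scorer (illustrative sketch, assumed layer sizes).

    Inputs:
        depth_crop: (N, 1, 32, 32) depth crop, assumed centered on the grasp
                    and rotated so the grasp axis lies along the image x-axis.
        gripper_z:  (N, 1) gripper depth relative to the camera.
    Output:
        (N, 1) predicted probability that each grasp succeeds.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.image_fc = nn.Linear(16 * 8 * 8, 128)
        self.pose_fc = nn.Linear(1, 16)           # embeds the gripper depth
        self.head = nn.Sequential(
            nn.Linear(128 + 16, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, depth_crop, gripper_z):
        img = self.features(depth_crop).flatten(1)
        img = torch.relu(self.image_fc(img))
        pose = torch.relu(self.pose_fc(gripper_z))
        return self.head(torch.cat([img, pose], dim=1))


if __name__ == "__main__":
    # Rank a batch of hypothetical grasp candidates sampled from one depth image.
    model = GraspQualityCNN()
    crops = torch.rand(8, 1, 32, 32)   # one depth crop per candidate grasp
    z = torch.rand(8, 1)               # gripper depth for each candidate
    p_success = model(crops, z)        # (8, 1) predicted robustness in [0, 1]
    best = p_success.argmax()
    print(f"best candidate: {best.item()}, p = {p_success[best].item():.3f}")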

  • Robotics