Grasping is one of the primary modes of manipulation in robotics. External vision sensors such as RGBD cameras have traditionally been used to guide robots through manipulation tasks like pick-and-place. However, these sensors are often positioned away from the grasp point and provide limited information about whether a grasp succeeds or fails, and about the mode of failure. In this work, we address one such failure mode: failure due to rotation of the object about the grasp point. As shown in the figure below (left), if an object is grasped at a point away from its center of gravity, it rotates about the grasp point, and this can lead to grasp failure. Because the rotation occurs in the local region around the gripping point, it is difficult for external vision sensors to detect and measure.
We address this problem using a high-resolution, vision-based tactile sensor, the GelSight sensor. The GelSight sensor outputs images of its gel pad, which has markers painted on it, as shown in the figure below (middle and right).
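Since object rotation at the grasp point appears as motion of the painted markers on the gel pad, a natural first processing step is to locate the markers in each tactile image. The sketch below is not the pipeline described here; it is a minimal illustration, assuming OpenCV is available, of how marker centroids might be extracted with blob detection. The image file name and detector thresholds are placeholder assumptions.

```python
# Minimal sketch: locate GelSight markers as dark blobs in a tactile image.
# File name and threshold values are illustrative assumptions, not the authors' settings.
import cv2
import numpy as np

def detect_markers(gelsight_image_bgr):
    """Return an (N, 2) array of marker centroids (x, y) in pixel coordinates."""
    gray = cv2.cvtColor(gelsight_image_bgr, cv2.COLOR_BGR2GRAY)

    # The markers are dark dots on a lighter gel pad, so search for dark blobs.
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 0        # dark blobs
    params.filterByArea = True
    params.minArea = 10         # tune to the marker size in pixels (assumption)
    params.maxArea = 400

    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray)
    return np.array([kp.pt for kp in keypoints], dtype=np.float32)

if __name__ == "__main__":
    frame = cv2.imread("gelsight_frame.png")  # hypothetical example frame
    centroids = detect_markers(frame)
    print(f"Detected {len(centroids)} markers")
```

Tracking how these centroids move between consecutive frames would then give a displacement field on the gel surface, from which in-grasp rotation could in principle be estimated.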