
Taxim is a realistic and high-speed simulation model for a vision-based tactile sensor, GelSight. Our simulation framework is the first to incorporate marker motion field simulation together with optical simulation. We simulate the optical response to deformation with a polynomial lookup table that maps deformed geometries to the pixel intensities sampled by the embedded camera. We apply linear elastic deformation theory and the superposition principle to simulate the motion of the surface markers caused by the stretch of the elastomer surface. The example-based approach requires fewer than 100 data points from a real sensor to calibrate the simulator, enabling the model to easily migrate to other GelSight sensors and their variations.
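
A minimal sketch of the lookup-table idea is shown below; the gradient features and polynomial degree are illustrative assumptions, not the exact Taxim implementation (which also accounts for the spatially varying illumination across the sensor).

```python
import numpy as np

def poly_features(nx, ny, deg=2):
    """Polynomial features of the surface gradients (illustrative choice)."""
    feats = [np.ones_like(nx)]
    for d in range(1, deg + 1):
        for i in range(d + 1):
            feats.append(nx ** (d - i) * ny ** i)
    return np.stack(feats, axis=-1)           # (..., n_features)

def calibrate(nx, ny, rgb, deg=2):
    """Least-squares fit of per-channel polynomial coefficients from a
    small set of real sensor measurements (fewer than 100 contacts)."""
    A = poly_features(nx.ravel(), ny.ravel(), deg)
    return np.linalg.lstsq(A, rgb.reshape(-1, 3), rcond=None)[0]

def render(height_map, coeffs, deg=2):
    """Map a deformed height map to a simulated tactile image."""
    gy, gx = np.gradient(height_map)           # surface gradients
    A = poly_features(gx.ravel(), gy.ravel(), deg)
    img = A @ coeffs
    return img.reshape(*height_map.shape, 3).clip(0, 255)
```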

Simulation is useful for data-driven robot manipulation tasks, providing a prototyping platform and unlimited data. We integrate Taxim, a simulation model for GelSight tactile sensors, into a physics-based robot simulator, and model the physics of contact as the bridge between them. We apply this system to sim-to-real transfer learning for grasp stability prediction: we calibrate the robot dynamics, contact model, and tactile optical simulator with real-world data, and then demonstrate the effectiveness of our system on various objects.

We propose a new way of thinking about dynamic tactile sensing: building a lightweight data-driven model on top of simplified physical principles. The liquid in a bottle oscillates after a perturbation. We propose a simple physics-inspired model to explain this oscillation and use a high-resolution tactile sensor, GelSight, to sense it. Specifically, the viscosity and the height of the liquid determine the decay rate and frequency of the oscillation. We then train a Gaussian Process Regression model on a small amount of real data to estimate the liquid properties. Experiments show that our model classifies three different liquids with 100% accuracy, estimates volume with high precision, and even estimates the concentration of a sugar-water solution. It is data-efficient and easily generalizes to other liquids and bottles.
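
Concretely, the pipeline fits a damped oscillation to the tactile trace and regresses liquid properties from the fitted parameters. The sketch below illustrates the idea with assumed signal and feature choices; it is not the exact model used in the project.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def damped_osc(t, amp, decay, freq, phase):
    """Damped oscillation: decay rate ~ viscosity, frequency ~ liquid height."""
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

def fit_oscillation(t, signal):
    """Extract (decay, freq) from a tactile force / marker-motion trace."""
    p0 = [signal.max(), 1.0, 2.0, 0.0]               # rough initial guess
    params, _ = curve_fit(damped_osc, t, signal, p0=p0, maxfev=10000)
    return params[1], params[2]                       # decay rate, frequency

# Regress a liquid property (e.g., volume) from (decay, freq) features,
# trained on a handful of labeled real traces.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
# X_train: (n, 2) fitted (decay, freq); y_train: (n,) measured volumes
# gpr.fit(X_train, y_train)
# volume_pred, std = gpr.predict(X_test, return_std=True)
```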

Tactile sensing has seen rapid adoption with the advent of vision-based tactile sensors, which are compact and inexpensive and provide high-resolution data for precise in-hand manipulation and human-robot interaction. However, simulating these sensors remains a challenge. In this project, we develop optical simulation techniques based on physically accurate material models and ray tracing, which can be used for novel sensor design and data-driven experiments.
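
Full ray tracing with measured material models is beyond a short snippet, but the sketch below shows a much-simplified stand-in: Lambertian shading of a gel height map under three colored directional lights, mimicking a GelSight-style illumination ring. The light directions and colors are illustrative assumptions.

```python
import numpy as np

def shade(height_map):
    """Simplified stand-in for the full ray tracer: Lambertian shading
    of the gel surface under three colored directional lights."""
    gy, gx = np.gradient(height_map)
    n = np.dstack([-gx, -gy, np.ones_like(height_map)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)     # unit surface normals
    lights = {                                        # direction -> RGB color
        (1.0, 0.0, 0.5): (255, 0, 0),
        (-0.5, 0.87, 0.5): (0, 255, 0),
        (-0.5, -0.87, 0.5): (0, 0, 255),
    }
    img = np.zeros((*height_map.shape, 3))
    for d, c in lights.items():
        d = np.array(d) / np.linalg.norm(d)
        lam = np.clip(n @ d, 0, None)                 # Lambertian term
        img += lam[..., None] * np.array(c)
    return img.clip(0, 255).astype(np.uint8)

bump = np.exp(-((np.mgrid[:64, :64] - 32) ** 2).sum(0) / 100.0)  # toy indentation
img = shade(0.5 * bump)
```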

Grasping is one of the primary modalities of manipulation in robotics. External vision sensors such as RGBD cameras have traditionally been used to guide robots through manipulation tasks such as pick-and-place. However, these sensors are often positioned away from the grasp point and provide little information about the success or failure of a grasp and the mode of failure. In this work, we address one such failure mode: rotation of the object about the grasp point. If an object is grasped away from its center of gravity, it rotates under gravitational torque, leading to grasp failure. Because this rotation happens in the local region around the gripping point, it is challenging for vision sensors to detect and measure.
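
The mechanics behind this failure mode are simple: the object's weight exerts a torque about the grasp point proportional to the offset between the grasp and the center of gravity, as in the back-of-the-envelope check below (all numbers are illustrative).

```python
import numpy as np

g = 9.81                                   # m/s^2
mass = 0.5                                 # kg, example object
r = np.array([0.06, 0.0, -0.02])           # m, grasp point -> center of gravity
weight = np.array([0.0, 0.0, -mass * g])   # N, gravity in the world frame

torque = np.cross(r, weight)               # N*m, torque about the grasp point
print(torque)                              # ~[0, 0.294, 0]: the object pitches
# If |torque| exceeds the frictional torque the gripper pads can resist,
# the object rotates in-hand and the grasp fails.
```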

Knowledge of 3-D object shape is important for robot manipulation, but may not be readily available in unstructured environments. We propose a framework that incrementally reconstructs tabletop 3-D objects from a sequence of tactile images and a noisy depth map. Our contributions include: (i) recovering local shape from GelSight images, learned via tactile simulation, and (ii) incremental shape mapping through inference on our Gaussian process spatial graph (GP-SG). We demonstrate visuo-tactile mapping on both simulated and real-world datasets.
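
The mapping step can be pictured as Gaussian process regression over noisy signed-distance observations gathered from tactile patches and depth. The sketch below uses a plain GP for clarity rather than the GP-SG factorization, and all data in it are stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Observations: surface points (SDF = 0) from tactile patches and depth,
# plus off-surface points along contact normals (SDF = +d) to anchor the field.
surface_pts = np.random.rand(50, 3)           # stand-in for fused contact points
normals = np.tile([0.0, 0.0, 1.0], (50, 1))   # stand-in for estimated normals
d = 0.01
X = np.vstack([surface_pts, surface_pts + d * normals])
y = np.hstack([np.zeros(50), d * np.ones(50)])

gp = GaussianProcessRegressor(kernel=Matern(nu=1.5) + WhiteKernel(1e-4))
gp.fit(X, y)

# Query the implicit surface: the zero level set is the reconstructed
# shape, and the GP variance flags unexplored regions to touch next.
query = np.random.rand(100, 3)
sdf, std = gp.predict(query, return_std=True)
```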

When humans grasp objects in the real world, we often move our arm to hold the object in a different pose where we can use it. In contrast, typical lab settings only study the stability of a grasp immediately after lifting, without any subsequent re-positioning of the arm. However, an object's stability can vary widely with its holding pose, as the gravitational torque and gripper contact forces can change completely. To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multi-modal dataset collected over the full cycle of grasping an object, re-positioning the arm to one of the sampled poses, and shaking the object. Using data from PoseIt, we formulate and tackle the task of predicting whether a grasped object is stable in a particular held pose. We train an LSTM classifier that achieves 85% accuracy on the proposed task, and our experimental results show that the classifier also generalizes to unseen objects and poses. Finally, we compare different tactile sensors for the stability prediction task, demonstrating that the classifier performs better when trained on GelSight data than on data collected from the WSG-DSA pressure array sensor. PoseIt will be publicly released.
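
The classifier itself is conceptually simple: an LSTM over the time series of tactile features recorded while the arm re-positions and shakes. A minimal PyTorch sketch follows; the layer sizes and feature dimension are assumptions, not the exact architecture used in PoseIt.

```python
import torch
import torch.nn as nn

class GraspStabilityLSTM(nn.Module):
    """Binary stable/unstable prediction from a tactile feature sequence."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                  # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)           # h: (layers, batch, hidden)
        return self.head(h[-1])            # logits over {unstable, stable}

model = GraspStabilityLSTM()
seq = torch.randn(8, 30, 64)               # 8 grasps, 30 time steps each
logits = model(seq)
pred = logits.argmax(dim=1)                # 1 = stable (convention assumed)
```
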
Photonic Tactile Skin

Tactile sensing provides robots with the sense of touch, which is crucial for demanding manipulation tasks. This project aims to design flexible tactile sensors with high sensitivity, high resolution, compact size, and easy integration with manipulators. The design is enabled by Parylene C photonics, in collaboration with the Chamanzar Lab. We envision that this technology will find future applications in dexterous robotic hands, tactile-sensing prosthetics, and high-sensitivity tactile wearable devices.
Soft Robotic Sensing

Soft bodies and robots offer a unique advantage for contact-rich tasks, thanks to their innate flexibility and compliance. However, because these bodies make extensive contact with the objects they interact with, characterizing and measuring the deformation and contact forces between them is critical for controlling and properly actuating soft robots. In this work, we aim to design and construct a soft robotic hand whose fingers integrate the features of the GelSight tactile sensor through components such as embedded cameras, patterned surfaces, and a new illumination system. The GelSight sensor measures surface normals and reconstructs height maps of the contact surface, providing 3D geometry and contact force information at high spatial resolution; the fingers will combine these capabilities with the compliance and flexibility of soft bodies, which benefits dexterous grasping and manipulation.
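
A building block these fingers inherit from GelSight is turning per-pixel surface normals into a height map. One standard way to do this is Frankot-Chellappa integration of the gradient field in the Fourier domain, sketched below (our sensor pipeline may differ in its details).

```python
import numpy as np

def integrate_normals(nx, ny, nz):
    """Frankot-Chellappa: recover a height map z(x, y) from surface
    normals by least-squares integration of the gradient field."""
    eps = 1e-8
    p = -nx / (nz + eps)                   # dz/dx
    q = -ny / (nz + eps)                   # dz/dy
    h, w = p.shape
    fx = np.fft.fftfreq(w)[None, :] * 2 * np.pi
    fy = np.fft.fftfreq(h)[:, None] * 2 * np.pi
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = fx**2 + fy**2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    Z = (-1j * fx * P - 1j * fy * Q) / denom
    Z[0, 0] = 0.0                          # fix the unknown height offset
    return np.real(np.fft.ifft2(Z))
```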

Multidirectional, high-resolution tactile sensing using computer vision allows robotic hands, grippers, and probes to acquire real-time tactile feedback. In particular, multidirectional tactile sensing with a device that has a small form factor and a compliant surface will support the development of human-like fingers with robust force, pressure, and texture feedback, with applications in manufacturing, minimally invasive surgery, agriculture, human-robot interaction, and more.
The Fingertip GelSight sensor provides information about 3D geometry and contact force by analyzing the deformation of its reflective surface under contact and reconstructing a 3D height map of the contact surface. However, the membrane limits the range of angles at which the sensor can interact with an object while still obtaining a contact deformation of sufficient quality, constraining the current sensor to parallel-jaw grippers. In this work, we propose new fingertip designs for the GelSight tactile sensor that are capable of obtaining contact information over a larger surface area, allowing the tactile sensor to interact with a larger distribution of objects at various angles.