Visual-Tactile Geometric Reasoning

Published:

This work presents an architecture that combines depth and tactile information to build rich, accurate 3D models of objects from single depth images; these models can then be used for robotic manipulation tasks. The core of the approach is a 3D convolutional neural network (CNN). Offline, the network is trained on paired depth and tactile data to predict an object's full geometry, filling in the occluded regions. At runtime, the network is given a partial view of an object and produces an initial shape hypothesis from depth alone. A grasp is planned using this hypothesis, and a guarded move is executed to collect tactile information. The network then refines its estimate of the object's geometry using the newly collected tactile data.
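The sketch below illustrates the general idea of this kind of pipeline; it is not the authors' exact architecture. It assumes the depth and tactile observations are voxelized into a two-channel occupancy grid (one channel for surface voxels seen by the depth camera, one for tactile contact voxels), and a small 3D CNN predicts per-voxel occupancy for the completed shape. The grid size, channel counts, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VoxelCompletionCNN(nn.Module):
    """Hypothetical visual-tactile shape completion network (illustrative only)."""
    def __init__(self):
        super().__init__()
        # Encoder: 3D convolutions downsample the partial observation.
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=4, stride=2, padding=1),   # 40^3 -> 20^3
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1),  # 20^3 -> 10^3
            nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions upsample back to the full grid
        # and output a per-voxel occupancy probability.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 10^3 -> 20^3
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 20^3 -> 40^3
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, 2, D, H, W) partial observation -> (batch, 1, D, H, W) completion.
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = VoxelCompletionCNN()
    # Toy input: depth-only observation first, then with a tactile contact added.
    partial = torch.zeros(1, 2, 40, 40, 40)
    partial[0, 0, 10:30, 10:30, 10] = 1.0   # visible surface from the depth image
    hypothesis = net(partial)               # initial shape hypothesis from depth alone
    partial[0, 1, 20, 20, 30] = 1.0         # a contact voxel from a guarded move
    refined = net(partial)                  # refined completion using touch data
    print(hypothesis.shape, refined.shape)  # both torch.Size([1, 1, 40, 40, 40])
```

At runtime the same network is simply queried twice: once with an empty tactile channel to plan the first grasp, and again after the guarded move fills in contact voxels.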

Citation: Jacob Varley, David Watkins, and Peter Allen. “Visual-Tactile Geometric Reasoning (Abstract and Poster)”. In: Data-Driven Manipulation workshop, Robotics: Science and Systems (2017).

Download paper here