Bingjie Tang

I am a PhD student in the Computer Science Department of the Viterbi School of Engineering at the University of Southern California, where I work on robot manipulation. My PhD advisor is Gaurav S. Sukhatme. During my PhD, I have been fortunate to collaborate with Yashraj Narang and Dieter Fox at the NVIDIA Seattle Robotics Lab as a research intern.

I have an MS in Computer Science from Brown University, where I was a research assistant for George Konidaris and Stefanie Tellex. I received a bachelor's degree in Computer Science from Huazhong University of Science and Technology, and I did my final-year project at Shanghai Jiao Tong University with Hai Zhao.

Email  /  GitHub  /  Google Scholar  /  CV


Research

I'm interested in robot manipulation, task and motion planning, and multimodal perception for robots.

IndustReal: Transferring Contact-Rich Assembly Tasks from Simulation to Reality


Bingjie Tang*, Michael Lin*, Iretiayo Akinola, Ankur Handa, Gaurav Sukhatme, Fabio Ramos, Dieter Fox, Yashraj Narang. (*Equal Contribution)
19th Robotics: Science and Systems (RSS), 2023.
pdf / website

We present a set of algorithms, systems, and tools that solve assembly tasks in simulation with reinforcement learning (RL) and successfully achieve policy transfer to the real world. Specifically, we propose 1) simulation-aware policy updates, 2) signed-distance-field rewards, and 3) sampling-based curricula for robotic RL agents. We use these algorithms to enable robots to solve contact-rich pick, place, and insertion tasks in simulation. We then propose 4) a policy-level action integrator to minimize error at policy deployment time. We build and demonstrate a real-world robotic assembly system that uses the trained policies and action integrator to achieve repeatable performance in the real world. Finally, we present hardware and software tools that allow other researchers to fully reproduce our system and results.

Selective Object Rearrangement in Clutter


Bingjie Tang, Gaurav S. Sukhatme.
6th Annual Conference on Robot Learning (CoRL), 2022.
pdf / website

We present an image-based, learned method for selective tabletop object rearrangement in clutter using a parallel-jaw gripper. Our method consists of three stages: graph-based object sequencing (which object to move), feature-based action selection (whether to push or grasp, and at what position and orientation), and a visual-correspondence-based placement policy (where to place a grasped object).

Learning Multi-action Tabletop Rearrangement Policies


Bingjie Tang, Gaurav S. Sukhatme.

pdf / video

Rearranging a physical environment requires, to varying degrees, perceptual skill, the ability to navigate unstructured environments, effective object manipulation, and long-horizon task planning. We propose a feature-based method that jointly learns two action primitives and a rearrangement planning policy in a tabletop setting. Two separate fully connected networks map visual observations to actions, and another deep neural network learns rearrangement planning conditioned on the goal specification, the perceptual input, and the selected action primitive.

Learning Collaborative Push and Grasp Policies in Dense Clutter


Bingjie Tang, Matthew Corsaro, George Konidaris, Stefanos Nikolaidis, Stefanie Tellex.
IEEE International Conference on Robotics and Automation (ICRA), 2021.
pdf / video

Robots must reason about pushing and grasping in order to manipulate flexibly in cluttered environments. We train a robot to learn joint planar pushing and 6-degree-of-freedom (6-DoF) grasping policies via self-supervision. With collaborative pushes and an expanded grasping action space, our system handles cluttered scenes containing a wide variety of objects (e.g., grasping a plate from the side after pushing away surrounding obstacles).








Design and source code from Jon Barron's website