Bingjie Tang

I am a PhD student at the University of Southern California, advised by Professor Gaurav S. Sukhatme in the USC Robotic Embedded Systems Laboratory (RESL). During my PhD, I have also collaborated closely with Yashraj Narang and Dieter Fox at the NVIDIA Seattle Robotics Lab.

Email  /  Google Scholar  /  CV

profile photo

Research

I am broadly interested in robotic manipulation, with a focus on learning sim-to-real transferable skills for contact-rich manipulation and developing tools for efficient and reliable sim-to-real transfer.

AutoMate: Specialist and Generalist Assembly Policies over Diverse Geometries


Bingjie Tang, Iretiayo Akinola, Jie Xu, Bowen Wen, Ankur Handa, Karl Van Wyk, Dieter Fox, Gaurav S. Sukhatme, Fabio Ramos, Yashraj Narang
20th Robotics: Science and Systems (RSS), 2024.
arxiv / pdf / website / blogpost /

A learning framework and system consisting of four parts: 1) a dataset of 100 assemblies compatible with both simulation and the real world, along with parallelized simulation environments for policy learning; 2) a novel simulation-based approach for learning specialist (i.e., part-specific) and generalist (i.e., unified) assembly policies; 3) demonstrations of specialist policies that individually solve 80 assemblies with ≈80% or higher success rates in simulation, as well as a generalist policy that jointly solves 20 assemblies with an 80%+ success rate; and 4) zero-shot sim-to-real transfer that achieves performance comparable to (or better than) simulation, including on end-to-end assembly.

IndustReal: Transferring Contact-Rich Assembly Tasks from Simulation to Reality


Bingjie Tang*, Michael Lin*, Iretiayo Akinola, Ankur Handa, Gaurav Sukhatme, Fabio Ramos, Dieter Fox, Yashraj Narang. (*Equal Contribution)
19th Robotics: Science and Systems (RSS), 2023.
arxiv / pdf / video / code / website / blogpost / dataset /

We present a set of algorithms, systems, and tools that solve assembly tasks in simulation with reinforcement learning (RL) and successfully achieve policy transfer to the real world. Specifically, we propose 1) simulation-aware policy updates, 2) signed-distance-field rewards, and 3) sampling-based curricula for robotic RL agents. We use these algorithms to enable robots to solve contact-rich pick, place, and insertion tasks in simulation. We then propose 4) a policy-level action integrator to minimize error at policy deployment time. We build and demonstrate a real-world robotic assembly system that uses the trained policies and action integrator to achieve repeatable performance in the real world. Finally, we present hardware and software tools that allow other researchers to fully reproduce our system and results.

Selective Object Rearrangement in Clutter


Bingjie Tang, Gaurav S. Sukhatme.
6th Annual Conference on Robot Learning (CoRL), 2022.
pdf / video / website /

An image-based, learned method for selective tabletop object rearrangement in clutter using a parallel-jaw gripper. Our method consists of three stages: graph-based object sequencing (deciding which object to move), feature-based action selection (deciding whether to push or grasp, and at what position and orientation), and a visual correspondence-based placement policy (deciding where to place a grasped object).

Learning Collaborative Push and Grasp Policies in Dense Clutter


Bingjie Tang, Matthew Corsaro, George Konidaris, Stefanos Nikolaidis, Stefanie Tellex
IEEE International Conference on Robotics and Automation (ICRA), 2021.
pdf / video /

Robots must reason about pushing and grasping in order to manipulate flexibly in cluttered environments. We train a robot to learn joint planar pushing and 6-degree-of-freedom (6-DoF) grasping policies through self-supervision. With collaborative pushes and an expanded grasping action space, our system handles cluttered scenes containing a wide variety of objects (e.g., grasping a plate from the side after pushing away surrounding obstacles).

Design and source code from Jon Barron's website