Haokun Wang

I received my Ph.D. degree in Robotics and Autonomous Systems from HKUST in 2024, working with Prof. Shaojie Shen and Prof. Michael Yu Wang. My research interests span robotics science and systems, with a focus on safe and precise robot motion.

My early research primarily revolved around using tactile and visual perception on robot arms to accomplish dexterous manipulation tasks, such as adaptive grasping and peg-in-hole assembly. More recently, I have been particularly interested in developing motion planning and control methods that can guarantee safety and precision in complex and uncertain environments, especially in scenarios where impact is inevitable or intentional.

I received my bachelor's degree from the Department of CSE, SUSTech in 2019. Subsequently, I worked as a research assistant at the BionicDL Lab for a year.

Email  /  GitHub  /  Google Scholar  /  LinkedIn


Journals & Magazines


Impact-Aware Motion Planning and Control for Aerial Robots with Suspended Payloads


H. Wang⁺, H. Li⁺, B. Zhou, F. Gao, and S. Shen

IEEE Transactions on Robotics (T-RO), 2024 (accepted)
website / video / code

A quadrotor with a cable-suspended payload has two motion modes, depending on whether the cable is taut or slack, and presents complicated hybrid dynamics. In this work, we propose a novel impact-aware planning and control framework that resolves potential impacts caused by motion mode switching.
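For context, a minimal sketch of the standard two-mode model from the cable-suspended-payload literature (a point-mass payload at $x_L$ on a massless, inextensible cable of length $l$ attached to the quadrotor at $x_Q$; the notation is illustrative, not the paper's exact formulation):

$$
m_L \ddot{x}_L =
\begin{cases}
T\,q - m_L g\, e_3, & \|x_Q - x_L\| = l \quad \text{(taut, tension } T \ge 0\text{)} \\
-m_L g\, e_3, & \|x_Q - x_L\| < l \quad \text{(slack, payload in free fall)}
\end{cases}
\qquad q = \frac{x_Q - x_L}{l}
$$

An impact occurs whenever the cable regains tautness with nonzero relative velocity along $q$; this is exactly the event an impact-aware planner must anticipate rather than avoid.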


AutoTrans: A Complete Planning and Control Framework for Autonomous UAV Payload Transportation


H. Li, H. Wang, C. Feng, F. Gao, B. Zhou, and S. Shen

IEEE Robotics and Automation Letters (RA-L), 2023
paper / video / code

We present a real-time planning method that generates smooth trajectories for unmanned aerial vehicles with suspended payloads, and we design an adaptive NMPC controller to compensate for unknown external perturbations and inaccurate model parameters.
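As a rough illustration, an adaptive NMPC of this kind repeatedly solves an optimal control problem of the following generic form (the horizon $T_h$, weights $Q$ and $R$, and parameter estimate $\hat{\theta}$ are illustrative assumptions, not the paper's exact design):

$$
\min_{u(\cdot)} \int_0^{T_h} \|x(t) - x_{\mathrm{ref}}(t)\|_Q^2 + \|u(t)\|_R^2 \, dt
\quad \text{s.t.} \quad \dot{x} = f(x, u; \hat{\theta}), \quad x(0) = x_{\mathrm{now}}
$$

where $\hat{\theta}$ collects uncertain quantities such as payload mass and external disturbances and is updated online, so the prediction model adapts as conditions change.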


Real-Time Collision-Free Grasp Pose Detection with Geometry-Aware Refinement Using High-Resolution Volume


J. Cai, J. Cen, H. Wang, and M. Y. Wang

IEEE Robotics and Automation Letters (RA-L), 2022
paper / website

In this letter, we propose a novel vision-based grasp system for closed-loop, 6-degree-of-freedom (6-DoF) grasping of unknown objects in cluttered environments.


DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation


H. Wang, X. Liu, N. Qiu, N. Guo, F. Wan, and C. Song

Frontiers in Robotics and AI, 2022
paper / code

This paper proposes DeepClaw 2.0 as a low-cost, open-source data collection platform for learning human manipulation. We developed an intuitive interface that converts raw sensor data into state-action pairs for imitation learning and further demonstrated the dataset's potential with both real robotic hardware and a simulated environment.
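To make the state-action conversion concrete, here is a minimal, hypothetical sketch in Python (the Frame fields and the next-pose-as-action convention are illustrative assumptions, not DeepClaw 2.0's actual schema or API):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    """One time-stamped record from a human demonstration (hypothetical schema)."""
    t: float                    # timestamp in seconds
    ee_pose: Tuple[float, ...]  # end-effector pose, e.g. (x, y, z, qx, qy, qz, qw)
    gripper: float              # gripper opening in [0, 1]

def frames_to_state_action(frames: List[Frame]):
    """Pair each state with the next recorded setpoint as the demonstrated action."""
    pairs = []
    for cur, nxt in zip(frames, frames[1:]):
        state = (*cur.ee_pose, cur.gripper)
        action = (*nxt.ee_pose, nxt.gripper)  # next pose as the action label
        pairs.append((state, action))
    return pairs

if __name__ == "__main__":
    demo = [
        Frame(0.0, (0.30, 0.00, 0.20, 0.0, 0.0, 0.0, 1.0), 1.0),
        Frame(0.1, (0.30, 0.00, 0.15, 0.0, 0.0, 0.0, 1.0), 1.0),
        Frame(0.2, (0.30, 0.00, 0.15, 0.0, 0.0, 0.0, 1.0), 0.2),  # close gripper
    ]
    for s, a in frames_to_state_action(demo):
        print(s, "->", a)

Pairing each observed state with the next recorded setpoint is one common convention for behavior cloning; other choices (e.g., velocity commands as actions) would change only the action extraction.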


A Reconfigurable Design for Omni-Adaptive Grasp Learning


F. Wan, H. Wang, J. Wu, Y. Liu, S. Ge, and C. Song

IEEE Robotics and Automation Letters (RA-L), 2020
paper

In this letter, we investigate how learning methods can be used to support the design reconfiguration of robotic grippers for grasping, using a novel soft structure with omni-directional adaptation.


Rigid-Soft Interactive Learning for Robust Grasping


L. Yang, F. Wan, H. Wang, X. Liu, Y. Liu, J. Pan, and C. Song

IEEE Robotics and Automation Letters (RA-L), 2020
paper

Inspired by the widespread use of soft fingers in grasping, we propose a rigid-soft interactive learning method that aims to reduce data collection time. We find experimental evidence that the type of interaction between the gripper and the target object plays an essential role in the effectiveness of the learning method.




Conferences


Jigsaw-based Benchmarking for Learning Robotic Manipulation


X. Liu, F. Wan, S. Ge, H. Wang, H. Sun, and C. Song

IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), 2023
paper

In this paper, we propose a method for benchmarking robotic manipulation that uses the jigsaw game to evaluate the spatial-temporal reasoning skills required for robot learning.


DeepClaw: A Robotic Hardware Benchmarking Platform for Learning Object Manipulation


F. Wan, H. Wang, X. Liu, L. Yang, and C. Song

IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2020
paper / code

We present DeepClaw as a reconfigurable benchmark of robotic hardware and task hierarchy for robot learning. We provide a detailed design of the robot cell, built from readily available parts, to create an experimental environment that can host a wide range of robotic hardware.


Design and source code from Leonid Keselman's website and Jon Barron's website