Implementation notes for a robot manipulator that grasps objects using point cloud data. Read the report for more details.
- Aubo-i5 robot
- Asus Xtion Pro RGB-D camera
- Everything runs on Ubuntu 16.04
- OpenRave
- Grasp Pose Detection (GPD) package for generating grasp poses from point cloud data
- TrajOpt
- Some installation notes
- Camera calibration:
  - RGB calibration: tutorial
  - Depth calibration: jsk_pcl_ros
    The depth sensor of the Asus camera has a significant error (5 cm at a 50 cm distance), so it is very important to calibrate this sensor properly. I detected the error when using the Aruco tag: the coordinate of the Aruco tag was 5 cm behind the point cloud data of the tag. After calibration, the offset is about 1 cm, which is good enough. The above package aligns the depth estimated from RGB images with the depth output of the sensor. It can be installed through apt-get install. A sketch of the error check is shown after this list.
    It is also important to move the checkerboard to cover as much space as possible, and to focus on taking samples in your working space as well. Improper calibration will result in a skewed point cloud, which is not useful.
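The error check described above can be scripted. Below is a minimal sketch, assuming OpenCV's aruco contrib module, a depth image registered to the RGB image and expressed in meters, and a placeholder marker dictionary and size; it compares the tag's z coordinate from the RGB pose estimate against the sensor depth at the tag's center pixel.

```python
# Minimal sketch of the depth-error check: compare the Aruco tag's z
# (estimated from the RGB image) against the depth image value at the tag's
# center pixel. The dictionary, marker size, and image conventions are
# assumptions, not the project's exact settings.
import cv2
import cv2.aruco as aruco
import numpy as np

def depth_error(rgb_image, depth_image, camera_matrix, dist_coeffs,
                marker_length=0.05):
    """Return (z from RGB pose) - (z from depth sensor) in meters, or None."""
    dictionary = aruco.Dictionary_get(aruco.DICT_6X6_250)
    corners, ids, _ = aruco.detectMarkers(rgb_image, dictionary)
    if ids is None:
        return None
    rvecs, tvecs = aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)[:2]
    u, v = corners[0][0].mean(axis=0).astype(int)  # tag center pixel
    sensor_z = float(depth_image[v, u])            # sensor depth there (m)
    return float(tvecs[0][0][2]) - sensor_z
```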
- Coordinate transformation:
  To plan the robot's motion to the desired grasp location, we need to transform whatever the camera sees into the robot coordinate frame; that is, we need the transformation matrix from the camera_optical_coordinate to the robot_base_coordinate. I used the Aruco tag as the intermediate step.
  - The Aruco_coordinate (displayed in the figure above) can be computed easily with respect to the camera_optical_coordinate.
  - As the tag is attached to the end-effector in a known way, the relationship between the Aruco_coordinate and the end_effector_coordinate is also determined.
  - The forward kinematics gives you the last missing piece: the relationship between the end_effector_coordinate and the robot_base_coordinate.
  - I found it very useful for error detection to publish all the frames in rviz using static_transform_publisher from ROS tf. I created a launch file for that purpose. For example, the code below publishes the robot_base_frame relative to the end_effector_frame, which I determined using the robot kinematics:

    ```xml
    <launch>
      <node pkg="tf" type="static_transform_publisher" name="robot_base_broadcaster"
            args="-0.214593534602693 -0.629495407290422 -0.215495156750654
                  1.616801510011362 -1.303983886960539 1.570780925228566
                  end_effector_frame robot_base_frame 100" />
    </launch>
    ```
  - Once all the frames are published, the transformation matrix from the camera_optical_coordinate to the robot_base_frame can be computed with tf's lookupTransform function, as in the sketch below. There is also a GitHub repo doing the same thing.
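As a concrete illustration, here is a minimal Python sketch of that lookup; the frame names follow this README and should be adjusted to whatever is actually published on your system.

```python
# Minimal sketch: query the camera -> robot_base transform with tf and build
# the 4x4 homogeneous matrix. Frame names are assumptions from this README.
import rospy
import tf

rospy.init_node('camera_to_base_lookup')
listener = tf.TransformListener()
listener.waitForTransform('robot_base_frame', 'camera_optical_frame',
                          rospy.Time(0), rospy.Duration(4.0))
trans, rot = listener.lookupTransform('robot_base_frame',
                                      'camera_optical_frame', rospy.Time(0))
T = tf.transformations.quaternion_matrix(rot)  # 4x4 rotation from quaternion
T[0:3, 3] = trans                              # add the translation
print(T)  # maps points in camera_optical_frame into robot_base_frame
```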
- Prepare the OpenRave environment: OpenRave is the environment that the motion planner (TrajOpt) uses to plan collision-free trajectories. It contains the robot model and the environment model (the point cloud data that the camera sees, transformed into the robot coordinate frame).
  - Aubo-i5 robot model: In this link, a URDF model of the robot is stored. However, the model needs to be converted to a .dae file to be used by OpenRave. An example command for the conversion (the ROS workspace needs to be compiled first so that the system knows where the file is):

    ```
    rosrun collada_urdf urdf_to_collada <input-urdf> <output.dae>
    ```

    The output .dae file can be loaded in OpenRave with the command `openrave aubo_i5_full.dae`.
  - The transformed point cloud data: As we already have the transformation matrix from the camera_optical_coordinate to the robot_base_coordinate, it is straightforward to convert the point cloud data into the robot_base_coordinate. The example code is in this line; there, the transformation matrix is stored in a text file, and the point cloud data is given in .pcd format as the input. The core operation is sketched after this list.
  - Environment file: This follows the OpenRave XML format. Basically, it loads the robot and applies some camera view settings. It is important, however, to define a manipulator in this file, with its base and end-effector specified, as the manipulator name will be used in the constraints of OpenRave.
  - Loading everything together: a minimal sketch of assembling the environment is shown below.
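The transformed point cloud step above boils down to applying one 4x4 homogeneous transform to every point. A minimal numpy sketch, with stand-ins for the matrix file and for the cloud loaded from the .pcd file:

```python
# Minimal sketch of the cloud transformation itself. Loading the .pcd file is
# omitted; 'points' stands in for the N x 3 array read from it, and T for the
# matrix loaded from the text file (e.g. with np.loadtxt).
import numpy as np

T = np.eye(4)                    # stand-in: 4x4 camera_optical -> robot_base
points = np.random.rand(100, 3)  # stand-in: N x 3 cloud in camera_optical_coordinate

ones = np.ones((points.shape[0], 1))
homogeneous = np.hstack([points, ones])       # N x 4 homogeneous points
transformed = T.dot(homogeneous.T).T[:, :3]   # N x 3 in robot_base_coordinate
```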
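And a minimal openravepy sketch of loading everything together, assuming the file names used above and representing the cloud as small collision boxes (the project's actual representation may differ):

```python
# Minimal sketch: load the converted robot model and the transformed cloud
# into one OpenRave environment. File names and the box size are assumptions.
import numpy as np
import openravepy

env = openravepy.Environment()
env.Load('aubo_i5_full.dae')        # robot model converted from the URDF
env.SetViewer('qtcoin')             # optional: visual sanity check

cloud = np.loadtxt('cloud_robot_base.txt')  # N x 3, in robot_base_coordinate
body = openravepy.RaveCreateKinBody(env, '')
body.SetName('scene_cloud')
# Each row: box center (x, y, z) plus half-extents; 5 mm boxes per point.
boxes = np.hstack([cloud, 0.005 * np.ones((cloud.shape[0], 3))])
body.InitFromBoxes(boxes, True)     # True: create renderable geometry
env.Add(body)
```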
- Grasp generation: Given the point cloud data of the selected object, GPD can be used to generate grasp poses on the object. Filtering is also applied to give priority to the poses that are closer to the current orientation of the gripper; the idea is sketched below.
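The filtering can be as simple as ranking the candidates by how well their approach direction aligns with the gripper's current approach axis. A hedged sketch, using a simplified grasp representation rather than GPD's actual message type:

```python
# Minimal sketch of the filtering idea: prefer grasp candidates whose approach
# direction is closest to the gripper's current approach axis. 'grasps' is a
# simplified stand-in for GPD's output, not its actual message layout.
import numpy as np

def rank_by_alignment(grasps, gripper_approach):
    """grasps: list of 3-vectors (approach directions); best-aligned first."""
    g = gripper_approach / np.linalg.norm(gripper_approach)
    return sorted(grasps, key=lambda a: -np.dot(a / np.linalg.norm(a), g))
```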
- Grasp execution: Given the point cloud data of the objects, the desired grasp pose, and the model of the robot, TrajOpt can generate a collision-free trajectory guiding the robot to the desired grasp pose. The trajectory is then replayed on the real robot. A minimal sketch of a TrajOpt request follows.
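For orientation, here is a minimal sketch of a TrajOpt problem in its documented JSON request format; the manipulator name, link name, and target pose are placeholders, and env is the OpenRave environment assembled above.

```python
# Minimal sketch of a TrajOpt request: joint-velocity and collision costs plus
# a final pose constraint at the grasp. Names and values are placeholders.
import json
import trajoptpy

request = {
    "basic_info": {"n_steps": 10, "manip": "manipulator", "start_fixed": True},
    "costs": [
        {"type": "joint_vel", "params": {"coeffs": [1]}},
        {"type": "collision", "params": {"coeffs": [20], "dist_pen": [0.025]}},
    ],
    "constraints": [
        {"type": "pose",
         "params": {"xyz": [0.5, 0.0, 0.3],        # placeholder grasp position
                    "wxyz": [1.0, 0.0, 0.0, 0.0],  # placeholder orientation
                    "link": "end_effector_link",   # assumed link name
                    "timestep": 9}},
    ],
    "init_info": {"type": "stationary"},
}

# env: the openravepy.Environment loaded earlier (robot + scene cloud).
prob = trajoptpy.ConstructProblem(json.dumps(request), env)
result = trajoptpy.OptimizeProblem(prob)
traj = result.GetTraj()  # n_steps x n_joints array to replay on the robot
```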
- Results: