Pick and Place Using MoveIt 2 and Perception – ROS 2 Jazzy

In this tutorial, we will use the MoveIt Task Constructor for ROS 2 to carry out a pick and place task. Here is what you will build by the end of this tutorial:

[Animation: pick and place demo in RViz]
[Animation: pick and place demo in Gazebo]

We’ll use a depth camera to dynamically detect and locate objects in the scene.

By the end of this tutorial, you’ll have created a vision-enhanced pick and place system that demonstrates the power of combining perception with sophisticated motion planning.

Here are the skills you will learn in this tutorial:

  1. How to integrate a simulated depth camera into Gazebo
  2. How to process point cloud data to detect and locate objects
    • A point cloud is a collection of data points in 3D space that represent the surface of an object or environment.
  3. How to dynamically update the MoveIt planning scene based on visual information
  4. How to modify the MoveIt Task Constructor pipeline to use real-time object poses (positions and orientations)

Here is a high-level overview of what our enhanced program will do:

  1. Set up the demo scene with randomly placed objects
  2. Acquire and process point cloud data from the depth camera
  3. Detect and localize objects in the scene
  4. Update the planning scene with the detected objects
  5. Define a pick sequence that includes:
    • Opening the gripper
    • Moving to a visually determined pre-grasp position
    • Approaching the target object
    • Closing the gripper
    • Lifting the object
  6. Define a place sequence that includes:
    • Moving to a designated place location
    • Lowering the object
    • Opening the gripper
    • Retreating from the placed object
  7. Plan the entire pick and place task using the updated scene information
  8. Optionally execute the planned task
  9. Provide detailed feedback on each stage of the process, including visual perception results

Prerequisites

All the code is here on my GitHub repository. Note that I am working with ROS 2 Jazzy, so the steps might be slightly different for other versions of ROS 2.

Create a Package

Let’s create a package to store our code.

cd ~/ros2_ws/src/mycobot_ros2/
ros2 pkg create \
  --build-type ament_cmake \
  --dependencies \
    generate_parameter_library \
    libpcl-all-dev \
    moveit_common \
    moveit_core \
    moveit_ros_planning \
    moveit_ros_planning_interface \
    moveit_task_constructor_core \
    moveit_task_constructor_msgs \
    pcl_conversions \
    pcl_ros \
    rclcpp \
    sensor_msgs \
    shape_msgs \
    tf2_eigen \
    tf2_geometry_msgs \
  --license BSD-3-Clause \
  --maintainer-name ubuntu \
  --maintainer-email automaticaddison@todo.com \
  --description "Pick and place demo using the MoveIt Task Constructor for motion planning and the Point Cloud Library for perception." \
  mycobot_mtc_pick_place_demo
cd ~/ros2_ws/
rosdep install --from-paths src --ignore-src -r -y

Type in your password, and install any missing dependencies.

Now build.

colcon build && source ~/.bashrc

Don’t worry about any build errors at this stage. We will fix those later.

Create the Launch Files

Now let’s create the launch files. You will add the code for each one below.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/

Create a folder named launch.

mkdir launch && cd launch

Add your first launch file: 

touch pick_place_demo.launch.py

This launch file launches the MoveIt Task Constructor node responsible for pick and place with perception.

touch point_cloud_viewer.launch.py

This launch file is created for viewing point cloud data (.pcd files) in RViz. It starts a node to convert PCD files to point clouds, allows configuration of file paths and publishing intervals, and launches RViz with a specific configuration for visualizing the point cloud data.

Save the file, and close it. 

touch get_planning_scene_server.launch.py 

This launch file starts the GetPlanningSceneServer node (we’ll go over this service in detail later in this tutorial), which is responsible for providing the current planning scene. It loads configuration parameters from a YAML file.

Save the file, and close it.

Install the Packet Capture Library

Open a terminal window, and type:

sudo apt-get install libpcap-dev

Create a Parameter File

Now let’s add some parameters.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/
mkdir config && cd config
touch mtc_node_params.yaml 

Add the code, and save.

This YAML file contains configuration parameters for the MoveIt Task Constructor node. It includes settings for robot control, object manipulation, motion planning, and various timeout and scaling factors. These parameters define the behavior of the pick and place task, including grasp generation, approach distances, and Cartesian motion settings.

Save the file, and close it. 

touch get_planning_scene_server.yaml 

This configuration file sets parameters for the GetPlanningSceneServer node. It includes settings for point cloud processing, plane and object segmentation, support surface detection, and various thresholds for filtering and clustering. These parameters are for processing sensor data and creating an accurate representation of the planning scene.

Save the file, and close it. 

Add the Source Code

Below I will guide you through how I set up the source code. If you want to learn more details about the implementation of each piece of code, go to the Appendix.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/src 
touch mtc_node.cpp 

Add the code.

This file implements the main MoveIt Task Constructor (MTC) node for the pick and place task. It sets up the planning scene, creates the MTC task with various stages such as move to pick, grasp, lift, and place. The file also handles the execution of the task, coordinating the robot’s movements to complete the pick and place operation.

Save the file, and close it. 

touch cluster_extraction.cpp 

This file contains functions for extracting clusters from a point cloud using a region growing algorithm. It helps in separating different objects in the scene by grouping points that likely belong to the same object. The extracted clusters can then be processed individually for object recognition or manipulation tasks.

Save the file, and close it. 

touch get_planning_scene_client.cpp 

This file implements a test client for the GetPlanningScene service. It’s responsible for requesting the current planning scene, which includes information about objects in the environment. 

Save the file, and close it. 

touch get_planning_scene_server.cpp 

This file implements the server for the GetPlanningScene service. It processes point cloud and RGB image data to generate CollisionObjects for the MoveIt planning scene. These CollisionObjects represent the obstacles and objects in the robot’s environment, allowing for accurate motion planning and object manipulation.

Save the file, and close it. 

touch normals_curvature_and_rsd_estimation.cpp 

This file contains functions to estimate normal vectors, curvature values, and Radius-based Surface Descriptor (RSD) values for each point in a point cloud. These geometric features help in understanding the shape and orientation of surfaces in the scene. The estimated features can be used for tasks such as object recognition, segmentation, and grasp planning.

Save the file, and close it. 

touch object_segmentation.cpp 

This file does most of the heavy lifting. It implements object segmentation for the input 3D point cloud. It includes functions for fitting geometric primitives (cylinders and boxes) to point cloud data, which is used to identify and represent objects in the scene.

Save the file, and close it. 

touch plane_segmentation.cpp 

This file contains functions to segment the support plane and objects from a given point cloud. It identifies the surface on which objects are placed, separating it from the objects themselves. This segmentation is important for tasks such as determining where objects can be placed.

Save the file, and close it.

Add the Include Files

Now let’s create the header files.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/include/mycobot_mtc_pick_place_demo/
touch cluster_extraction.h 

Save the file, and close it.

touch get_planning_scene_client.h 

Save the file, and close it. 

touch normals_curvature_and_rsd_estimation.h 

Save the file, and close it. 

touch object_segmentation.h 

Save the file, and close it. 

touch plane_segmentation.h

Save the file, and close it.

These header files define the interfaces for the corresponding source files we created earlier. They contain function declarations, class definitions, and necessary include statements. 

Add Launch Scripts

Let’s add some launch scripts.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/ 
mkdir scripts && cd scripts
touch pointcloud.sh 

Save the file, and close it. 

touch robot.sh 

Save the file, and close it. 

chmod +x pointcloud.sh 
chmod +x robot.sh

These scripts will help you launch the necessary components for viewing point clouds and running the robot simulation with Gazebo, RViz, and MoveIt 2. 

The pointcloud.sh script is designed to launch the robot in Gazebo and then view a specific point cloud file. 

The robot.sh script launches the full setup including the robot in Gazebo, RViz for visualization, and sets up MoveIt 2 for motion planning. The chmod commands at the end make both scripts executable, allowing you to run them directly from the terminal.

Add the RViz Configuration File

Now let’s add the RViz configuration file.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/ 
mkdir rviz && cd rviz
touch point_cloud_viewer.rviz

Add the RViz configuration file.

Save the file, and close it.

Visualize the PointCloud Data in RViz

[Image: visualizing the point cloud in RViz]

Test Point Cloud Data Generation in Gazebo

We haven’t built our package yet (using “colcon build”), but it is helpful to know how to test point cloud data in Gazebo once everything has been built. 

To confirm the point cloud data is being generated successfully in Gazebo, you would run these commands:

bash ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts/mycobot_280_gazebo_and_moveit.sh
gz topic -l

See the information about a topic:

gz topic -t <topic_name> -i

Echo a topic:

gz topic -t <topic_name> -e

You can run these commands to confirm data is being generated:

ros2 topic echo /camera_head/depth/camera_info 
ros2 topic echo /camera_head/depth/color/points

To see the frequency of publishing, type:

ros2 topic hz /camera_head/depth/camera_info 
ros2 topic hz /camera_head/depth/color/points

To get more information on the topics, type:

ros2 topic info /camera_head/color/image_raw
ros2 topic info /camera_head/depth/color/points

Go to RViz and click the Add button in the Displays panel on the left.

Click the “By topic” tab.

Click /camera_head -> /depth -> /color -> /points -> PointCloud2 so you can see the point cloud in RViz.

Click OK.

You can also see the camera image topic information.

Click /camera_head -> /color -> /image_raw -> Image 

[Image: raw camera image in RViz]

Press CTRL + C to close everything down.

Create a ROS 2 Service Interface for Point Cloud Processing for MoveIt Planning Scene Generation

Now let’s create a ROS 2 service that processes point cloud data to generate CollisionObjects for a MoveIt planning scene. This service will segment the input point cloud, fit primitive shapes to the segments, and create corresponding CollisionObjects. The service will also provide the necessary data for subsequent grasp generation should you decide to use a grasp generation strategy other than the one we will implement in this tutorial.

Here is a description of the service on a high level:

Input (Request)

  • std::string: Target object shape (e.g., “cylinder”, “box”)
  • std::vector<double>: Approximate target object dimensions (for identification)

Output (Response)

  • moveit_msgs::msg::PlanningSceneWorld: Contains CollisionObjects for all detected objects
  • sensor_msgs::msg::PointCloud2: Full scene point cloud
  • sensor_msgs::msg::Image: RGB image of the scene 
  • std::string: ID of the target object in the PlanningSceneWorld
  • std::string: ID of the support surface in the PlanningSceneWorld
  • bool: Success flag

Create a Package

Let’s create a new package called mycobot_interfaces to store our custom service definition. This package will be used across your mycobot projects for custom message and service definitions.

Here is the full package.

Navigate to your mycobot workspace:

cd ~/ros2_ws/src/mycobot_ros2

Create the new package:

ros2 pkg create \
    --build-type ament_cmake \
    --dependencies moveit_msgs sensor_msgs rclcpp \
    --license BSD-3-Clause \
    --maintainer-name ubuntu \
    --maintainer-email automaticaddison@todo.com \
    --description "Service definitions for generating MoveIt planning scenes from point cloud data, including segmentation and primitive shape fitting for CollisionObjects" \
    mycobot_interfaces

Navigate into the new package:

cd mycobot_interfaces

Create a srv directory for our service definitions:

mkdir srv

Update the package.xml file:

gedit package.xml

Make it look like what is on GitHub.

Save the file, and close it.

Update the CMakeLists.txt file:

gedit CMakeLists.txt

Make it look like what is on GitHub.

Comment out these lines for now (we will uncomment them after creating the service definition):

#rosidl_generate_interfaces(${PROJECT_NAME}
#  "srv/GetPlanningScene.srv"
#  DEPENDENCIES moveit_msgs sensor_msgs
#)

Save the file, and close it.

Build the package to ensure everything is set up correctly:

cd ~/ros2_ws
colcon build --packages-select mycobot_interfaces
source ~/.bashrc

Create the Custom Service Interface

Now that we have our mycobot_interfaces package set up, let’s create the custom service interface for our planning scene generation service.

Navigate to the srv directory in the mycobot_interfaces package:

cd ~/ros2_ws/src/mycobot_ros2/mycobot_interfaces/srv

Create a new file for our service definition:

touch GetPlanningScene.srv

Add this content to define our service interface:
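
The full definition is on GitHub. As a sketch, a definition consistent with the request and response fields described earlier would look like this (the field names here are illustrative, not necessarily the exact ones used in the repository):

# Request: the shape ("cylinder" or "box") and approximate dimensions of the target object
string target_shape
float64[] target_dimensions
---
# Response: the planning scene, raw sensor data, object IDs, and a success flag
moveit_msgs/PlanningSceneWorld scene_world
sensor_msgs/PointCloud2 full_cloud
sensor_msgs/Image rgb_image
string target_object_id
string support_surface_id
bool success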

Save and close the file.

Uncomment these lines in the CMakeLists.txt so they look like this:

rosidl_generate_interfaces(${PROJECT_NAME}
  "srv/GetPlanningScene.srv"
  DEPENDENCIES moveit_msgs sensor_msgs
)

Save the file.

cd ~/ros2_ws
colcon build --packages-select mycobot_interfaces
source ~/.bashrc

Confirm Your Custom Interface

After creating our custom service interface, it’s important to verify that it has been created correctly. Follow these steps to confirm the interface creation:

Open a new terminal.

Navigate to your workspace:

cd ~/ros2_ws

Use the ros2 interface show command to display the content of our newly created service:

ros2 interface show mycobot_interfaces/srv/GetPlanningScene

Remember, whenever you make changes to your interfaces, you need to rebuild the package and source your workspace again for the changes to take effect.

Now you have created a custom service interface for planning scene generation. This service will take a target shape and dimensions as input, and return a planning scene world, full point cloud, RGB image, target object ID, support surface ID (e.g. a table), and a success flag.

To use this service in other packages, add mycobot_interfaces as a dependency in the package.xml of the mycobot_mtc_pick_place_demo package where you want to use this service:

<depend>mycobot_interfaces</depend>

In your C++ code for the get_planning_scene_server, you would include the generated header:

#include <mycobot_interfaces/srv/get_planning_scene.hpp>

You can then create service clients or servers using this interface.
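
For example, here is a minimal client sketch. It assumes the illustrative field names from the service definition sketch above; check the actual .srv file on GitHub for the real names.

#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <mycobot_interfaces/srv/get_planning_scene.hpp>

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("get_planning_scene_test_client");
  auto client = node->create_client<mycobot_interfaces::srv::GetPlanningScene>("get_planning_scene");

  // Build the request: look for a cylinder roughly 2.5 cm in diameter and 10 cm tall
  auto request = std::make_shared<mycobot_interfaces::srv::GetPlanningScene::Request>();
  request->target_shape = "cylinder";
  request->target_dimensions = {0.025, 0.10};

  client->wait_for_service();
  auto future = client->async_send_request(request);
  if (rclcpp::spin_until_future_complete(node, future) == rclcpp::FutureReturnCode::SUCCESS) {
    auto response = future.get();
    RCLCPP_INFO(node->get_logger(), "Success: %s, target object id: %s",
                response->success ? "true" : "false", response->target_object_id.c_str());
  }
  rclcpp::shutdown();
  return 0;
}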

This custom service interface provides a clear contract for communication between your point cloud processing node and other nodes in your system that need planning scene information.

Edit CMakeLists.txt

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/
gedit CMakeLists.txt

Make sure your CMakeLists.txt looks like this.

Save the file, and close it.

Edit package.xml

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/
gedit package.xml

Add this code.

Save the file, and close it.

Build the Code

Let’s build the code now. We will do this in stages because of the developer warnings I encountered the first time I built the mycobot_mtc_pick_place_demo package.

First, build all packages other than the mycobot_mtc_pick_place_demo package.

cd ~/ros2_ws/
colcon build --packages-skip mycobot_mtc_pick_place_demo
source ~/.bashrc 

(Or run “source ~/ros2_ws/install/setup.bash” directly if you haven’t set up your ~/.bashrc to source the workspace automatically.)

Now build the mycobot_mtc_pick_place_demo package:

colcon build --packages-select mycobot_mtc_pick_place_demo --cmake-args -Wno-dev

Without the -Wno-dev flag, you would see this warning:

CMake Warning (dev) at /usr/lib/x86_64-linux-gnu/cmake/pcl/Modules/FindFLANN.cmake:45 (find_package):
  Policy CMP0144 is not set: find_package uses upper-case <PACKAGENAME>_ROOT
  variables.  Run "cmake --help-policy CMP0144" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.

  CMake variable FLANN_ROOT is set to:

    /usr

  For compatibility, find_package is ignoring the variable, but code in a
  .cmake module might still use it.
Call Stack (most recent call first):
  /usr/lib/x86_64-linux-gnu/cmake/pcl/PCLConfig.cmake:261 (find_package)
  /usr/lib/x86_64-linux-gnu/cmake/pcl/PCLConfig.cmake:306 (find_flann)
  /usr/lib/x86_64-linux-gnu/cmake/pcl/PCLConfig.cmake:570 (find_external_library)
  CMakeLists.txt:55 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

Just ignore it. Build the workspace again.

source ~/.bashrc
colcon build
source ~/.bashrc

Open a terminal window, and type the following command:

sudo sed -i 's/^\(\s*\)PCL_ERROR ("\[pcl::SampleConsensusModelPlane::isSampleGood\] Sample points too similar or collinear!\\n");/\1\/\/ PCL_ERROR ("[pcl::SampleConsensusModelPlane::isSampleGood] Sample points too similar or collinear!\\n");/' $(find /usr/include/pcl* -path "*/sample_consensus/impl/sac_model_plane.hpp")

This command comments out logging by the PCL library that happens during RANSAC. It silences this annoying warning:

[pcl::SampleConsensusModelPlane::isSampleGood] Sample points too similar or collinear!

Build one more time.

colcon build && source ~/.bashrc

Launch the Code

Finally…the moment you have been waiting for. Time to launch the code.

Let’s add two aliases. Open a terminal, and type these commands:

echo "alias pointcloud='bash ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/scripts/pointcloud.sh'" >> ~/.bashrc
echo "alias pick='bash ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_pick_place_demo/scripts/robot.sh'" >> ~/.bashrc

Open a new terminal window (or run “source ~/.bashrc”) so the aliases take effect. Then launch the entire demo:

pick

Ignore the message that looks like this:

[Err] [Physics.cc:1773] Attempting to create a mimic constraint for joint [gripper_base_to_gripper_left2] but the chosen physics engine does not support mimic constraints, so no constraint will be created.

You can also ignore this warning:

Warning: class_loader.impl: SEVERE WARNING!!! A namespace collision has occurred with plugin factory for class rviz_default_plugins::displays::InteractiveMarkerDisplay. New factory will OVERWRITE existing one. This situation occurs when libraries containing plugins are directly linked against an executable (the one running right now generating this message). Please separate plugins out into their own library or just don't link against the library and use either class_loader::ClassLoader/MultiLibraryClassLoader to open.

Here is what you would see (I disabled the Marker Array in the Displays panel):

[Image: the pick and place planning scene in RViz]

Camera tilt angle makes a big difference in the quality of the MoveIt planning scene. You can experiment with different values by modifying your URDF for the Intel RealSense camera.

Also, if you attempt to execute the plan, and your robot stalls as it tries to pick up the cylinder, experiment with different physics engine plugins for the SDF file for the Gazebo world.

[Image: pick and place in Gazebo]

If you want to visualize the raw 3D point cloud data, you can run the pointcloud.sh script. Make sure the files are in the appropriate location.

pointcloud

That’s it!

Deep Learning Alternatives for MoveIt Planning Scene Generation

In the service we developed to generate the collision objects for the planning scene, we fit primitive shapes to the 3D point cloud generated by our RGBD camera. We could have also generated the MoveIt planning scene using modern deep learning methods (I won’t go through this in this tutorial).

For example, you could use a package like isaac_ros_foundationpose to create 3D bounding boxes around objects in the scene and then add those boxes as collision objects. The advantage of this technique is that you have the pose of the object as well as the class information (e.g. mug, plate, bowl, etc.)

Here is what a Detection3D.msg message would look like if you were to type ‘ros2 topic echo <name_of_detection3d_topic>’:

---
header:
  stamp:
    sec: 1694627500
    nanosec: 500000000
  frame_id: kitchen_camera
results:
  - class_id: mug
    score: 0.95
  - class_id: plate
    score: 0.03
  - class_id: bowl
    score: 0.02
bbox:
  center:
    position:
      x: 0.5
      y: 1.2
      z: 0.8
    orientation:
      x: 0.0
      y: 0.0
      z: -0.7071
      w: 0.7071
  size:
    x: 0.1
    y: 0.1
    z: 0.15
id: kitchen_mug_2

Appendix: ROS 2 Service: Generating a MoveIt Planning Scene from a 3D Point Cloud 

Overview

The method I used for generating the planning scene was inspired by the following paper: 

Goron, Lucian Cosmin, et al. “Robustly segmenting cylindrical and box-like objects in cluttered scenes using depth cameras.” ROBOTIK 2012; 7th German Conference on Robotics. VDE, 2012.

If you get a chance, I highly recommend you read this entire paper. It provides a robust methodology for generating object primitives (i.e. boxes and cylinders) from a 3D point cloud scene even if the depth camera can only see the objects partially (i.e. from one side).

[Image: segmentation pipeline overview]
Image Source: Goron, Lucian Cosmin, et al. “Robustly segmenting cylindrical and box-like objects in cluttered scenes using depth cameras.” ROBOTIK 2012; 7th German Conference on Robotics. VDE, 2012.

Let’s go through the technical details of the ROS 2 service we created. The purpose of the service is to process point cloud and RGB image data to generate CollisionObjects for a MoveIt planning scene. I will cover how we segmented the input point cloud, fit primitive shapes to the segments, created corresponding CollisionObjects, and provided the necessary data for subsequent grasp generation.

Service Definition

Name: get_planning_scene

Input (Request)

  • std::string: Target object shape (e.g., “cylinder”, “box”)
  • std::vector<double>: Approximate target object dimensions (for identification)

Output (Response)

  • moveit_msgs::msg::PlanningSceneWorld: Contains CollisionObjects for all detected objects
  • sensor_msgs::msg::PointCloud2: Full scene point cloud
  • sensor_msgs::msg::Image: RGB image of the scene 
  • std::string: ID of the target object in the PlanningSceneWorld
  • std::string: ID of the support surface in the PlanningSceneWorld
  • bool: Success flag

Implementation Details 

Point Cloud Preprocessing

Estimate the support plane for the objects in the scene and extract the points in the point cloud that are above the support plane (plane_segmentation.cpp)

The first step of the algorithm identifies the points that make up the flat surface (like a table) that objects are sitting on.

Input
  • Point cloud (pcl::PointCloud<pcl::PointXYZRGB>)
Output
  • Point cloud for the support plane
  • Point cloud for the points above the detected support plane.
Process
  • Estimate surface normals
    • A normal is a vector perpendicular to the surface at a given point. It provides information about the local orientation of the surface
    • For each point in the cloud, estimate the normal by fitting a plane to its k-nearest neighbors. 
    • Store the computed normals, keeping them aligned with the original point cloud. 
  • Identify potential support surfaces
    • Use the computed surface normals to find approximately horizontal surfaces.
    • Group points whose normals are approximately parallel to the world Z-axis (vertical). These points likely belong to horizontal surfaces like tables.
    • Perform Euclidean clustering on these points to get support surface candidate clusters.
    • Store the support surface candidate clusters
  • For each support surface candidate cluster:
    • Use RANSAC to fit a plane model
    • Validate the plane model based on the robot’s workspace limits
      • If cropping is enabled:
        • Check if the plane is within the cropped area
      • If cropping is disabled:
        • Skip the position check
      • Check if the plane is close to z=0:
        • Define z_tolerance (e.g., 0.05 meters)
        • Ensure the absolute value of plane_center.z is less than z_tolerance
      • Verify the plane is approximately horizontal:
        • Define up_vector as (0, 0, 1)
        • Calculate dot_product between plane_normal and up_vector
        • Define angle_tolerance based on acceptable tilt (e.g., cos of 2.5 degrees)
        • Ensure dot_product is greater than angle_tolerance
      • If all applicable conditions are met, consider the plane model valid; otherwise, reject it
  • Select the best fitting plane as the support surface
    • From the set of validated plane models, choose the best candidate based on the following criteria:
      • Inlier count:
        • Define inlier_count for each plane model as the number of points that fit the model within a specified distance threshold 
        • Prefer plane models with higher inlier_count, as they represent surfaces with more supporting points
      • Plane size:
        • Calculate the area of each plane model by finding the 2D bounding box of inlier points projected onto the plane
        • Prefer larger planes, as they are more likely to represent the main support surface
      • Distance to z=0:
        • Calculate z_distance as the absolute distance of the plane’s center to z=0 
        • Prefer planes with smaller z_distance
      • Orientation accuracy:
        • Calculate orientation_score as the dot product between the plane’s normal and the up vector (0, 0, 1) 
        • Prefer planes with higher orientation_score (closer to being perfectly horizontal)
    • Combine these factors using a weighted scoring system:
      • Define weights for each factor (e.g., w_inliers, w_size, w_distance, w_orientation) 
      • Calculate a total_score for each plane model 
      • Select the plane model with the highest total_score as the best fitting plane
    • Store the selected best_plane_model for further use in object segmentation
  • Return the results:
    • Create support_plane_cloud:
      • Extract all inlier points from the original point cloud that belong to the best_plane_model
      • Store these points in support_plane_cloud
    • Create objects_cloud:
      • For each point in the original point cloud:
        • If the point is above the best_plane_model (use the plane equation to check)
        • And, if cropping is enabled, the point is within the crop boundaries
        • Then add the point to objects_cloud
    • Return both support_plane_cloud and objects_cloud

References:

  • M. Richtsfeld and M. Vincze, “Grasping of Unknown Objects from a Table Top,” in Workshop on Vision in Action: Efficient strategies for cognitive agents in complex environments, 2008.
  • R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, “Close-range Scene Segmentation and Reconstruction of 3D Point Cloud Maps for Mobile Manipulation in Human Environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, October 2009.
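
To make the plane-fitting portion of this step concrete, here is a minimal PCL sketch: estimate normals, then RANSAC-fit a plane whose normal is close to the world Z-axis. It omits the candidate clustering and weighted scoring described above, and the parameter values are illustrative rather than the tutorial’s tuned settings:

#include <cmath>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::ModelCoefficients::Ptr fitSupportPlane(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud) {
  // Estimate a normal for each point from its k nearest neighbors
  pcl::NormalEstimation<pcl::PointXYZRGB, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setKSearch(30);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  // RANSAC fit of a plane whose normal must be nearly parallel to the world Z-axis
  pcl::SACSegmentationFromNormals<pcl::PointXYZRGB, pcl::Normal> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_NORMAL_PARALLEL_PLANE);
  seg.setAxis(Eigen::Vector3f(0.0f, 0.0f, 1.0f));
  seg.setEpsAngle(2.5 * M_PI / 180.0);  // tolerated tilt from horizontal
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);       // inlier distance threshold (meters)
  seg.setNormalDistanceWeight(0.1);
  seg.setInputCloud(cloud);
  seg.setInputNormals(normals);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coefficients);  // inliers = candidate support plane points
  return coefficients;                   // plane equation ax + by + cz + d = 0
}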

Estimate normal vectors, curvature values, and radius descriptor values for each point in the point cloud (normals_curvature_and_rsd_estimation.cpp)

Input
  • Point cloud (of type pcl::PointCloud<pcl::PointXYZRGB>)
Output
  • Point Cloud with Normal Vectors, Curvature Values, and Radius Descriptor Values
    • Each point has an associated 3D vector representing its normal vector (a 3D vector indicating the direction the surface is facing at that point)
    • Each point also has an additional value representing how curved the surface is at that point.
    • Each point has Radius-based Surface Descriptor (RSD) values (the minimum and maximum surface radius that can be fitted to the point’s neighborhood). This step determines whether a point belongs to a linear or circular surface.
Process

This approach provides a method for handling boundary points and refining normal estimation using Maximum Likelihood Estimation Sample Consensus (MLESAC). It estimates normals, curvature, and RSD values for every point, using a more conservative approach for points identified as potentially being on a boundary.

For each point p in the cloud:

  1. Find neighbors:
    • Identify the k nearest neighboring points around p (k_neighbors as input parameter)
    • These k points form the “neighborhood” of p.
    • Check if the point is a boundary point by seeing if it has fewer than k_neighbors neighbors.
    • For boundary points, adjust the neighborhood size to use a smaller number of neighbors (up to min_boundary_neighbors or whatever is available).
  2. Estimate the normal vector:
    • Perform initial Principal Component Analysis (PCA) on the neighborhood:
      1. This captures how the neighborhood points are spread out around point p.
      2. The eigenvector corresponding to the smallest eigenvalue is an initial normal estimate.
    • Use Maximum Likelihood Estimation SAmple Consensus (MLESAC) to refine the local plane estimate:
      1. MLESAC aims to refine the plane fitting by robustly estimating the best plane model and discarding outliers.
      2. It uses the initial normal estimate as a starting point.
      3. If MLESAC fails, fall back to the initial normal estimate.
    • Compute a weighted covariance matrix:
      1. Assign weights to neighbor points based on their distance to the estimated local plane
      2. Points closer to the plane and inliers from MLESAC get higher weights
      3. This step helps to reduce the influence of outliers
    • Perform PCA on this weighted covariance matrix:
      1. Compute eigenvectors and eigenvalues
      2. The eigenvector corresponding to the smallest eigenvalue is the final normal estimate
  3. Compute curvature values:
    • Use the eigenvalues from PCA on the weighted covariance matrix to calculate the curvature values according to the following formula: curvature = λ₀ / (λ₀ + λ₁ + λ₂) where λ₀ ≤ λ₁ ≤ λ₂
  4. Calculate Radius-based Surface Descriptor (RSD) values:
    • Use pcl::RSDEstimation to compute the minimum and maximum surface radius that can be fitted to the point’s neighborhood.
    • This step determines whether a point belongs to a linear or circular surface.
  5. Output creation: Point cloud that includes:
    • The original XYZ coordinates
    • The RGB color information
    • The estimated normal vector for each point
    • The estimated curvature value for each point
    • The RSD values (r_min and r_max) for each point
Key Parameters
  • k_neighbors: Number of nearest neighbors to consider
  • max_plane_error: Maximum allowed error for plane fitting in MLESAC
  • max_iterations: Maximum number of iterations for MLESAC
  • min_boundary_neighbors: Minimum number of neighbors to consider for boundary points
  • rsd_radius: Radius to use for RSD estimation
References
  1. R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, “Towards 3D Point Cloud Based Object Maps for Household Environments,” Robotics and Autonomous Systems Journal (Special Issue on Semantic Knowledge in Robotics), vol. 56, no. 11, pp. 927–941, 30 November 2008.
  2. Z. C. Marton, D. Pangercic, N. Blodow, and M. Beetz, “Combined 2D-3D Categorization and Classification for Multimodal Perception Systems,” International Journal of Robotics Research, 2011.
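
Here is a minimal PCL sketch of this feature estimation step. It uses PCL’s standard normal estimation (whose curvature value is exactly λ₀ / (λ₀ + λ₁ + λ₂)) and pcl::RSDEstimation, but omits the boundary handling and MLESAC refinement described above; parameter values are illustrative:

#include <pcl/features/normal_estimation.h>
#include <pcl/features/rsd.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

void estimateFeatures(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud) {
  auto tree = pcl::search::KdTree<pcl::PointXYZRGB>::Ptr(new pcl::search::KdTree<pcl::PointXYZRGB>);

  // Normals and curvature from local PCA over the k nearest neighbors
  pcl::NormalEstimation<pcl::PointXYZRGB, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(30);  // k_neighbors
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  // Radius-based Surface Descriptor: min and max surface radius per point
  pcl::RSDEstimation<pcl::PointXYZRGB, pcl::Normal, pcl::PrincipalRadiiRSD> rsd;
  rsd.setInputCloud(cloud);
  rsd.setInputNormals(normals);
  rsd.setSearchMethod(tree);
  rsd.setRadiusSearch(0.03);  // rsd_radius (meters)
  pcl::PointCloud<pcl::PrincipalRadiiRSD>::Ptr radii(new pcl::PointCloud<pcl::PrincipalRadiiRSD>);
  rsd.compute(*radii);

  // radii->points[i].r_min and r_max suggest whether point i lies on a
  // flat/linear surface (large radii) or a circular one (small r_min).
}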

Cluster the point cloud into connected components using region growing based on nearest neighbors – cluster_extraction.cpp

Input
  • Point cloud (of type pcl::PointCloud<PointXYZRGBNormalRSD>::Ptr) – Point cloud with XYZ, RGB, normal vectors, curvature values, and Radius-based Surface Descriptor (RSD) values
Output
  • Vector of point cloud clusters, where each cluster is a separate point cloud containing:
    • Points that are close to each other in 3D space.
    • Each point retains its original attributes (xyz, RGB, normal, curvature, RSD values)
Process
  1. Create a KdTree object for efficient nearest neighbor searches
  2. Extract normals from the input cloud into a separate pcl::PointCloud<pcl::Normal> object
  3. Create a RegionGrowing object and set its parameters:
    • Minimum and maximum cluster size
    • Number of nearest neighbors to consider
    • Smoothness threshold (angle threshold for neighboring normals)
    • Curvature threshold
  4. Apply the region growing algorithm to extract clusters
  5. For each extracted cluster:
    • Create a new point cloud
    • Copy points from the input cloud to the new cluster cloud based on the extracted indices
    • Set the cluster’s width, height, and is_dense properties
Key Points
  • Purpose: Reduce the search space for subsequent segmentation by grouping nearby points
  • Note: In cluttered scenes, objects that are touching or very close may end up in the same cluster
  • This step does not perform final object segmentation, but prepares data for later segmentation steps
  • The algorithm uses both geometric properties (normals) and curvature information for clustering
  • Smoothness threshold is converted from degrees to radians in the implementation
Parameters
  • min_cluster_size: Minimum number of points that a cluster needs to contain
  • max_cluster_size: Maximum number of points that a cluster can contain
  • smoothness_threshold: Maximum angle difference between normals (in degrees, converted to radians internally)
  • curvature_threshold: Maximum difference in curvature between neighboring points
  • nearest_neighbors: Number of nearest neighbors to consider for region growing
Additional Features
  • The implementation calculates and logs various statistics for each cluster:
    • Cluster size
    • Centroid coordinates
    • Average RGB color
    • Curvature statistics (min, 20th percentile, average, median, 80th percentile, max)
    • Average RSD (Radius-based Surface Descriptor) values
References
  • R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, “Close-range Scene Segmentation and Reconstruction of 3D Point Cloud Maps for Mobile Manipulation in Human Environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, October 2009.
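
Here is a minimal sketch of the region growing step using PCL. For simplicity it takes a pcl::PointXYZRGB cloud plus a separate normal cloud rather than the tutorial’s combined PointXYZRGBNormalRSD type; parameter values are illustrative:

#include <cmath>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/region_growing.h>

std::vector<pcl::PointIndices> extractClusters(
    const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud,
    const pcl::PointCloud<pcl::Normal>::Ptr& normals) {
  auto tree = pcl::search::KdTree<pcl::PointXYZRGB>::Ptr(new pcl::search::KdTree<pcl::PointXYZRGB>);

  pcl::RegionGrowing<pcl::PointXYZRGB, pcl::Normal> rg;
  rg.setMinClusterSize(100);     // min_cluster_size
  rg.setMaxClusterSize(25000);   // max_cluster_size
  rg.setSearchMethod(tree);
  rg.setNumberOfNeighbours(30);  // nearest_neighbors
  rg.setInputCloud(cloud);
  rg.setInputNormals(normals);
  rg.setSmoothnessThreshold(3.0f * static_cast<float>(M_PI) / 180.0f);  // degrees → radians
  rg.setCurvatureThreshold(1.0f);  // curvature_threshold

  std::vector<pcl::PointIndices> clusters;
  rg.extract(clusters);  // each PointIndices is one cluster of the input cloud
  return clusters;
}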

Segmentation of Clutter – object_segmentation.cpp

The input into this step is the vector of point cloud clusters that was generated during the previous step.

Input
  • Vector of point cloud clusters, where each cluster is a separate point cloud containing:
    • Points that are close to each other in 3D space.
    • Each point retains its original attributes (xyz, RGB, normal, curvature, RSD values)
Output
  • moveit_msgs::CollisionObject[] collision_objects
Outer Loop

The entire process below occurs for each point cloud cluster in the input vector.

Process
Project the Point Cloud Cluster Onto the Surface Plane
  1. For each point (x,y,z,RGB,normal, curvature, RSDmin and RSDmax) in the 3D cluster:
    • Project the point onto the surface plane (assumed to be z=0). This creates a point (x, y).
    • Maintain a mapping between each 2D projected point and its original 3D point
Initialize Two Separate Parameter Spaces
  1. Create a vector to store line models for the Hough transform
    • Each line model stores rho, theta, votes, and inlier count
  2. Create a 3D Hough parameter space for circle models
    • Dimensions: center_x, center_y, radius
Understanding Hough Space

Hough space is a parameter space used for detecting specific shapes or models. Each point in Hough space represents a possible instance of the shape you’re looking for (in this case, circles or lines).

For lines:

  • The Hough space is 2D: (rho, theta), where rho is the perpendicular distance from the origin to the line and theta is the angle of that perpendicular vector
  • Each point in this space represents a possible line in the original image

For circles:

  • The Hough space is 3D: (center_x, center_y, radius)
  • Each point in this space represents a possible circle in the original image

The voting process in Hough space helps find shapes even if they’re partially hidden or broken in the original image.
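
As a concrete illustration, here is a minimal sketch of a discretized 3D Hough accumulator for circle models. The bin counts, ranges, and resolutions are illustrative values, not the tutorial’s settings:

#include <vector>

// accumulator[(xi * NY + yi) * NR + ri] counts votes for circles whose
// center falls in bin (xi, yi) and whose radius falls in bin ri
constexpr int NX = 100, NY = 100, NR = 50;                 // bins per dimension
constexpr double X_MIN = -0.5, Y_MIN = -0.5, R_MIN = 0.0;  // lower bounds (meters)
constexpr double XY_STEP = 0.01, R_STEP = 0.002;           // 1 cm and 2 mm bins

std::vector<int> accumulator(NX * NY * NR, 0);

// Each validated circle model casts one vote into its (center_x, center_y, radius) bin
void voteCircle(double cx, double cy, double r) {
  const int xi = static_cast<int>((cx - X_MIN) / XY_STEP);
  const int yi = static_cast<int>((cy - Y_MIN) / XY_STEP);
  const int ri = static_cast<int>((r - R_MIN) / R_STEP);
  if (xi >= 0 && xi < NX && yi >= 0 && yi < NY && ri >= 0 && ri < NR) {
    ++accumulator[(xi * NY + yi) * NR + ri];
  }
}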

Inner Loop (repeated num_iterations times)

RANSAC Model Fitting

  1. Line Fitting
    • Use RANSAC to fit a 2D line to the projected points. This is done to identify potential box-like objects.
  2. Circle Fitting
    • Use RANSAC to fit a 2D circle to the projected points. This is done to identify cylinder-like objects. (A minimal sketch of both fits follows this list.)
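
Here is a minimal sketch of these two fits using PCL’s 2D sample consensus models on the projected cloud (points with z = 0); the distance thresholds are illustrative:

#include <memory>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/ransac.h>
#include <pcl/sample_consensus/sac_model_circle.h>
#include <pcl/sample_consensus/sac_model_line.h>

void fitLineAndCircle(const pcl::PointCloud<pcl::PointXYZ>::Ptr& projected) {
  // Circle fit: coefficients are [center_x, center_y, radius]
  auto circle_model = std::make_shared<pcl::SampleConsensusModelCircle2D<pcl::PointXYZ>>(projected);
  pcl::RandomSampleConsensus<pcl::PointXYZ> circle_ransac(circle_model);
  circle_ransac.setDistanceThreshold(0.005);  // meters
  circle_ransac.computeModel();
  Eigen::VectorXf circle_coeffs;
  circle_ransac.getModelCoefficients(circle_coeffs);

  // Line fit: coefficients are a point on the line and a direction vector,
  // which can then be converted to the (rho, theta) form used in the Hough space
  auto line_model = std::make_shared<pcl::SampleConsensusModelLine<pcl::PointXYZ>>(projected);
  pcl::RandomSampleConsensus<pcl::PointXYZ> line_ransac(line_model);
  line_ransac.setDistanceThreshold(0.005);
  line_ransac.computeModel();
  std::vector<int> line_inliers;
  line_ransac.getInliers(line_inliers);  // inliers feed the filtering steps below
}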

Filter Inliers

For the fitted models from the RANSAC Model Fitting step, apply a series of filters to refine the corresponding set of inlier points.

Circle Filtering

  1. Euclidean Clustering
    • Use pcl::EuclideanClusterExtraction to group inliers into clusters.
    • Accept models with a maximum of two clusters (representing complete circles or two visible arcs of the same cylinder).
    • Reject models with more than the specified maximum number of clusters.
  2. Height Consistency (for two clusters only)
    • For models with exactly two clusters, check if the height difference between clusters is within the specified tolerance.
    • This ensures that the fitted circle represents a cross-section of a single, upright cylindrical object.
  3. Curvature Filtering
    • Keep inlier points with high curvature (above the specified threshold).
    • Remove inlier points with low curvature.
  4. Radius-based Surface Descriptor (RSD) Filtering
    • Compare the minimum surface radius (r_min) of each inlier point to the radius of the fitted circle.
    • Keep points where the difference is within the specified tolerance.
  5. Surface Normal Filtering
    • Calculate the angle between the point’s normal (projected onto the xy-plane) and the vector from the circle center to the point.
    • Keep points where this angle is within the specified threshold.

Line Filtering

  1. Euclidean Clustering
    • Use pcl::EuclideanClusterExtraction to group inliers into clusters.
    • Accept models with only one cluster.
    • Reject models with more than the specified maximum number of clusters.
  2. Curvature Filtering
    • Keep inlier points with low curvature (below the specified threshold).
    • Remove inlier points with high curvature. (A sketch of the Euclidean clustering check used by both filtering paths follows this list.)
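
Both filtering paths start with the same clustering mechanism. Here is a minimal sketch of that Euclidean clustering check (parameter values are illustrative):

#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

bool passesClusterFilter(const pcl::PointCloud<pcl::PointXYZ>::Ptr& inlier_cloud,
                         std::size_t max_clusters, double cluster_tolerance) {
  auto tree = pcl::search::KdTree<pcl::PointXYZ>::Ptr(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(inlier_cloud);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(cluster_tolerance);  // circle_/line_cluster_tolerance
  ec.setMinClusterSize(10);
  ec.setSearchMethod(tree);
  ec.setInputCloud(inlier_cloud);

  std::vector<pcl::PointIndices> clusters;
  ec.extract(clusters);

  // Circles accept up to two clusters (two visible arcs of one cylinder);
  // lines accept only one cluster
  return !clusters.empty() && clusters.size() <= max_clusters;
}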

Model Validation

For both circle and line models, check how many inlier points remain after filtering.

  • If the number of remaining inliers for the model exceeds the threshold:
    • The model is considered valid.
    • Add the model to the appropriate Hough parameter space.
  • If the number of remaining inliers is below the threshold:
    • The model is rejected.

An additional validation step compares inlier counts between circle and line models, keeping only the model type with more inliers.

Add Model to the Hough Space

If a model is valid:

  • For circles:
    • Add a vote to the 3D Hough space (center_x, center_y, radius bins)
  • For lines:
    • Add the line model (rho, theta, votes, inlier count) to the line models vector
    • rho is the perpendicular distance from the origin (0, 0) to the line
    • theta is the angle formed between the x-axis and this perpendicular vector (positive theta is counter-clockwise measured from x-axis)

Remove inliers and continue

Remove the inliers of valid models from the working point cloud and continue the inner loop until insufficient points remain or no valid models are found.

Cluster Parameter Spaces

After all iterations on a point cloud cluster:

  1. Cluster line models based on similarity in rho and theta.
  2. Cluster circle models in the 3D Hough space.

Select Model with Most Votes

Compare the top line cluster with the top circle cluster. Select the model type (line or circle) with the highest vote count.

Estimate 3D Shape

Using the parameters from the highest-vote cluster, fit the selected solid geometric primitive model type (box or cylinder) to the original 3D point cloud data.

Cylinder Fitting

  1. Use the 2D circle fit for radius and (x,y) center position.
  2. Set cylinder bottom at z=0.
  3. Set top height to the highest point in the cluster.
  4. Calculate dimensions and position of the cylinder.

Box Fitting

  1. Compute box orientation from the line angle.
  2. Project points onto the line direction and perpendicular direction to determine length and width.
  3. Use z-values of points to determine height.
  4. Calculate dimensions and position of the box.

Add Shape as Collision Object

The box or cylinder is added as a collision object (moveit_msgs::CollisionObject) to the planning scene with a unique id (e.g., box_0, box_1, cylinder_0, cylinder_1, etc.).
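
Here is a minimal sketch of turning a fitted cylinder into a CollisionObject. The tutorial computes the radius, center, and height from the Hough result; the function signature here is illustrative:

#include <string>
#include <geometry_msgs/msg/pose.hpp>
#include <moveit_msgs/msg/collision_object.hpp>
#include <shape_msgs/msg/solid_primitive.hpp>

moveit_msgs::msg::CollisionObject makeCylinderObject(
    double radius, double height, double center_x, double center_y,
    const std::string& frame_id, int index) {
  moveit_msgs::msg::CollisionObject obj;
  obj.header.frame_id = frame_id;
  obj.id = "cylinder_" + std::to_string(index);  // e.g. cylinder_0, cylinder_1, ...

  shape_msgs::msg::SolidPrimitive primitive;
  primitive.type = shape_msgs::msg::SolidPrimitive::CYLINDER;
  primitive.dimensions.resize(2);
  primitive.dimensions[shape_msgs::msg::SolidPrimitive::CYLINDER_HEIGHT] = height;
  primitive.dimensions[shape_msgs::msg::SolidPrimitive::CYLINDER_RADIUS] = radius;

  geometry_msgs::msg::Pose pose;   // cylinder bottom sits on the support plane at z = 0
  pose.position.x = center_x;
  pose.position.y = center_y;
  pose.position.z = height / 2.0;  // the primitive pose is at the cylinder's center
  pose.orientation.w = 1.0;

  obj.primitives.push_back(primitive);
  obj.primitive_poses.push_back(pose);
  obj.operation = moveit_msgs::msg::CollisionObject::ADD;
  return obj;
}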

Move to Next Cluster

Proceed to the next point cloud cluster in the vector (i.e., move to the next iteration of the Outer Loop).

Key Parameters
  • num_iterations: Number of RANSAC iterations per cluster
  • inlier_threshold: Minimum number of inliers for a model to be considered valid
  • ransac_distance_threshold: Maximum distance for a point to be considered an inlier in RANSAC
  • ransac_max_iterations: Maximum number of iterations for RANSAC
  • circle_min_cluster_size, line_min_cluster_size: Minimum size for Euclidean clusters
  • circle_max_clusters, line_max_clusters: Maximum number of allowed clusters
  • circle_height_tolerance: Maximum allowed height difference between two circle clusters
  • circle_curvature_threshold, line_curvature_threshold: Curvature thresholds for filtering
  • circle_radius_tolerance: Tolerance for RSD filtering
  • circle_normal_angle_threshold: Maximum angle between normal and radial vector for circles
  • circle_cluster_tolerance, line_cluster_tolerance: Distance threshold for Euclidean clustering
  • line_rho_threshold, line_theta_threshold: Thresholds for clustering line models in the parameter space

Get Planning Scene Server (get_planning_scene_server.cpp)

This code brings together all the previously developed components into a single, unified ROS 2 service. It integrates functions for point cloud processing, plane segmentation, object clustering, and shape fitting into a cohesive workflow. The service processes point cloud and RGB image data to generate a MoveIt planning scene, which can be called by a node using the MoveIt Task Constructor to obtain environment information for manipulation tasks. The core functionality is encapsulated in the handleService method of the GetPlanningSceneServer class.

handleService Method Walkthrough

  1. Initialize Response: The method starts by setting the success flag of the response to false.
  2. Check Data Availability: It verifies that both point cloud and RGB image data are available. If either is missing, it logs an error and returns.
  3. Validate Input Parameters and Prepare Point Cloud:
    • Checks if target_shape and target_dimensions are valid
    • Transforms the point cloud to the target frame
    • Applies optional cropping to the point cloud
  4. Convert PointCloud2 to PCL Point Cloud: Converts the ROS PointCloud2 message to a PCL point cloud for further processing.
  5. Segment Support Plane and Objects: Uses the segmentPlaneAndObjects function to separate the support surface from objects in the scene.
  6. Create CollisionObject for Support Surface: Generates a CollisionObject representing the support surface and adds it to the planning scene.
  7. Estimate Normals, Curvature, and RSD: Calls estimateNormalsCurvatureAndRSD to calculate geometric features for each point in the object cloud.
  8. Extract Clusters: Uses extractClusters to identify distinct object clusters in the point cloud.
  9. Get Collision Objects: Calls segmentObjects to convert point cloud clusters into collision objects for the planning scene.
  10. Identify Target Object: Searches through the collision objects to find one matching the requested target shape and dimensions.
  11. Assemble PlanningSceneWorld: Combines all collision objects into a complete PlanningSceneWorld structure.
  12. Fill the Response:
    • Sets the full point cloud, RGB image, target object ID, and support surface ID in the response
    • Sets the success flag to true if all critical steps were successful
    • Logs detailed information about the response contents

This method transforms raw sensor data into a structured planning scene that can be used by the MoveIt Task Constructor for motion planning and manipulation tasks.

Congratulations on reaching the end! Keep building!

Reusing Motion Plans – ROS 2 Jazzy MoveIt Task Constructor

In this tutorial, we’ll explore how to create reusable motion plans for robotic arms using the MoveIt Task Constructor. We’ll build an application from scratch that demonstrates how to define a series of modular movements that can be combined and reused. This approach allows for more flexible and maintainable robot motion planning, especially useful in scenarios where similar motion sequences are repeated or slightly modified.

Here is what you will develop:

[Animation: modular MoveIt Task Constructor demo]

Our application will showcase:

  1. Creation of a reusable module containing a sequence of movements
  2. Combining multiple instances of this module into a larger task
  3. Use of both Cartesian and joint space planning
  4. Integration with ROS 2 and logging of the planning process

By the end of this tutorial, you’ll have a deep understanding of how to structure complex motion plans using the MoveIt Task Constructor, making your robotics applications more modular and easier to maintain.

Here’s a high-level overview of what our program will do:

  1. Define a reusable module that includes:
    • Moving 5 cm in the positive X direction
    • Moving 2 cm in the negative Y direction
    • Rotating -18 degrees around the Z axis
    • Moving to a predefined “ready” position
  2. Create a main task that:
    • Starts from the current state
    • Moves to the “ready” position
    • Executes the reusable module five times in succession
    • Finishes by moving to the “home” position
  3. Plan and execute the task, providing detailed feedback on each stage

Real-World Use Cases

The reusable motion planning approach for robotic arms that you’ll develop in this tutorial has several practical applications:

  • Manufacturing and Assembly
    • Create modular motion sequences for pick-and-place tasks or component assembly
    • Optimize arm movements for repetitive operations, reducing cycle times (Cycle time is the total time it takes to complete one full operation, from start to finish)
  • Bin Picking and Sorting
    • Develop flexible routines for grabbing objects from bins with varying contents
    • Combine basic movement modules to handle different object shapes and orientations
  • Welding and Surface Treatment
    • Build libraries of arm motions for welding or spray painting different part shapes

By mastering these techniques, you’ll be able to create more flexible and efficient robotic arm systems. This modular approach allows you to more efficiently develop and adapt arm motions for various industries.

Prerequisites

All the code is here on my GitHub repository. Note that I am working with ROS 2 Jazzy, so the steps might be slightly different for other versions of ROS 2.

Create the Code

If you don’t already have modular.cpp, open a new terminal window, and type:

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_demos/src/
gedit modular.cpp

Add this code

/**
 * @file modular.cpp
 * @brief Demonstrates the use of MoveIt Task Constructor for robot motion planning.
 *
 * This program creates a reusable task for a robot arm using MoveIt Task Constructor.
 * It defines a series of movements including Cartesian paths and joint space motions.
 *
 * Key Concept:
 *   SerialContainer: This is a type of container in MoveIt Task Constructor that holds
 *     multiple movement stages. These stages are executed in sequence, one after another.
 *     Think of it like a to-do list for the robot, where each item must be completed
 *     before moving on to the next one.
 *
 * @author Addison Sears-Collins
 * @date December 19, 2024
 */

// Include necessary headers
#include <rclcpp/rclcpp.hpp>
#include <moveit/task_constructor/task.h>
#include <moveit/task_constructor/stages/current_state.h>
#include <moveit/task_constructor/solvers/cartesian_path.h>
#include <moveit/task_constructor/solvers/joint_interpolation.h>
#include <moveit/task_constructor/stages/move_to.h>
#include <moveit/task_constructor/stages/move_relative.h>
#include <moveit/task_constructor/stages/connect.h>
#include <moveit/task_constructor/container.h>
#include <moveit/planning_scene/planning_scene.h>
#include <geometry_msgs/msg/twist_stamped.hpp>
#include <geometry_msgs/msg/vector3_stamped.hpp>
#include <sstream>
#include <thread>

// Use the moveit::task_constructor namespace for convenience
using namespace moveit::task_constructor;

/**
 * @brief Creates a reusable module for robot movement.
 *
 * @param group The name of the robot group to move.
 * @return std::unique_ptr<SerialContainer> A container with a series of movement stages.
 */
std::unique_ptr<SerialContainer> createModule(const std::string& group) {
  // Create a new SerialContainer to hold our movement stages
  auto c = std::make_unique<SerialContainer>("Cartesian Path");
  c->setProperty("group", group);

  RCLCPP_INFO(rclcpp::get_logger("modular_demo"), "Creating module for group: %s", group.c_str());

  // Create solvers for Cartesian and joint space planning
  auto cartesian = std::make_shared<solvers::CartesianPath>();
  auto joint_interpolation = std::make_shared<solvers::JointInterpolationPlanner>();

  // Stage 1: Move 5 cm in the positive X direction
  {
    auto stage = std::make_unique<stages::MoveRelative>("x +0.05", cartesian);
    stage->properties().configureInitFrom(Stage::PARENT, { "group" });
    geometry_msgs::msg::Vector3Stamped direction;
    direction.header.frame_id = "base_link";
    direction.vector.x = 0.05;
    stage->setDirection(direction);
    c->insert(std::move(stage));
    RCLCPP_INFO(rclcpp::get_logger("modular_demo"), "Added stage: Move 5 cm in +X direction");
  }

  // Stage 2: Move 2 cm in the negative Y direction
  {
    auto stage = std::make_unique<stages::MoveRelative>("y -0.02", cartesian);
    stage->properties().configureInitFrom(Stage::PARENT);
    geometry_msgs::msg::Vector3Stamped direction;
    direction.header.frame_id = "base_link";
    direction.vector.y = -0.02;
    stage->setDirection(direction);
    c->insert(std::move(stage));
    RCLCPP_INFO(rclcpp::get_logger("modular_demo"), "Added stage: Move 2 cm in -Y direction");
  }

  // Stage 3: Rotate -18 degrees around the Z axis
  {
    auto stage = std::make_unique<stages::MoveRelative>("rz -18°", cartesian);
    stage->properties().configureInitFrom(Stage::PARENT);
    geometry_msgs::msg::TwistStamped twist;
    twist.header.frame_id = "base_link";
    twist.twist.angular.z = -M_PI / 10.; // -18 degrees in radians
    stage->setDirection(twist);
    c->insert(std::move(stage));
    RCLCPP_INFO(rclcpp::get_logger("modular_demo"), "Added stage: Rotate -18 degrees around Z axis");
  }

  // Stage 4: Move to the "ready" position
  {
    auto stage = std::make_unique<stages::MoveTo>("moveTo ready", joint_interpolation);
    stage->properties().configureInitFrom(Stage::PARENT);
    stage->setGoal("ready");
    c->insert(std::move(stage));
    RCLCPP_INFO(rclcpp::get_logger("modular_demo"), "Added stage: Move to 'ready' position");
  }

  RCLCPP_INFO(rclcpp::get_logger("modular_demo"), "Module creation completed with 4 stages");
  return c;
}

/**
 * @brief Creates the main task for robot movement.
 *
 * @param node The ROS2 node to use for loading the robot model.
 * @return Task The complete task for robot movement.
 */
Task createTask(const rclcpp::Node::SharedPtr& node) {
  Task t;
  t.loadRobotModel(node);
  t.stages()->setName("Reusable Containers");

  RCLCPP_INFO(node->get_logger(), "Creating task: %s", t.stages()->name().c_str());

  // Add the current state as the starting point
  t.add(std::make_unique<stages::CurrentState>("current"));
  RCLCPP_INFO(node->get_logger(), "Added current state as starting point");

  // Define the robot group to move
  const std::string group = "arm";

  // Add a stage to move to the "ready" position
  {
    auto stage = std::make_unique<stages::MoveTo>("move to ready", std::make_shared<solvers::JointInterpolationPlanner>());
    stage->setGroup(group);
    stage->setGoal("ready");
    t.add(std::move(stage));
    RCLCPP_INFO(node->get_logger(), "Added stage: Move to 'ready' position");
  }

  // Add five instances of our reusable module
  // This creates a sequence of movements that the robot will perform,
  // repeating the same set of actions five times in a row.
  RCLCPP_INFO(node->get_logger(), "Adding 5 instances of the reusable module");
  for (int i = 1; i <= 5; ++i) {
    t.add(createModule(group));
    RCLCPP_INFO(node->get_logger(), "Added module instance %d", i);
  }

  // Add a stage to move to the "home" position
  {
    auto stage = std::make_unique<stages::MoveTo>("move to home", std::make_shared<solvers::JointInterpolationPlanner>());
    stage->setGroup(group);
    stage->setGoal("home");
    t.add(std::move(stage));
    RCLCPP_INFO(node->get_logger(), "Added stage: Move to 'home' position");
  }

  RCLCPP_INFO(node->get_logger(), "Task creation completed with 5 module instances");
  return t;
}

/**
 * @brief Main function to set up and execute the robot task.
 *
 * @param argc Number of command-line arguments.
 * @param argv Array of command-line arguments.
 * @return int Exit status of the program.
 */
int main(int argc, char** argv) {
  // Initialize ROS2
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("modular_demo");
  auto logger = node->get_logger();

  RCLCPP_INFO(logger, "Starting modular demo");

  // Start a separate thread for ROS2 spinning
  std::thread spinning_thread([node] { rclcpp::spin(node); });

  // Create and plan the task
  auto task = createTask(node);
  try {
    RCLCPP_INFO(logger, "Starting task planning");

    // Plan the task
    moveit::core::MoveItErrorCode error_code = task.plan();

    // Log the planning result
    if (error_code == moveit::core::MoveItErrorCode::SUCCESS) {
      RCLCPP_INFO(logger, "Task planning completed successfully");
      RCLCPP_INFO(logger, "Found %zu solutions", task.numSolutions());

      // Use printState to log the task state
      std::ostringstream state_stream;
      task.printState(state_stream);
      RCLCPP_INFO(logger, "Task state:\n%s", state_stream.str().c_str());

      // If planning succeeds, publish the solution
      task.introspection().publishSolution(*task.solutions().front());
      RCLCPP_INFO(logger, "Published solution");
    } else {
      RCLCPP_ERROR(logger, "Task planning failed with error code: %d", error_code.val);

      // Use explainFailure to log the reason for failure
      std::ostringstream failure_stream;
      task.explainFailure(failure_stream);
      RCLCPP_ERROR(logger, "Failure explanation:\n%s", failure_stream.str().c_str());
    }

    // Log a simple summary of each stage
    RCLCPP_INFO(logger, "Stage summary:");
    for (size_t i = 0; i < task.stages()->numChildren(); ++i) {
      const auto* stage = task.stages()->operator[](i);
      RCLCPP_INFO(logger, "  %s: %zu solutions, %zu failures",
                  stage->name().c_str(), stage->solutions().size(), stage->failures().size());
    }

  } catch (const InitStageException& ex) {
    RCLCPP_ERROR(logger, "InitStageException caught during task planning: %s", ex.what());
    std::ostringstream oss;
    oss << task;
    RCLCPP_ERROR(logger, "Task details:\n%s", oss.str().c_str());
  }

  RCLCPP_INFO(logger, "Modular demo completed");

  // Wait for the spinning thread to finish
  spinning_thread.join();

  return 0;
}

Save the file, and close it.

Build the Code

cd ~/ros2_ws/
colcon build
source ~/.bashrc

Launch

Launch everything:

bash ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts/mycobot_280_mtc_demos.sh modular

OR

mtc_demos modular

Here is what you should see:

[Animation: modular MoveIt Task Constructor demo]

Understanding the Motion Planning Results

RViz – “Motion Planning Tasks” Panel

The “Motion Planning Tasks” panel in RViz provides a detailed breakdown of our reusable motion planning task. It presents a hierarchical view with “Motion Planning Tasks” at the root, followed by “Reusable Containers”.

2-motion-planning-tasks-panel

Under “Reusable Containers”, we can see the following stages:

  1. “current”: This represents the initial state of the robot.
  2. “move to ready”: The first movement to get the robot into a ready position.
  3. Five “Cartesian Path” stages: These correspond to our reusable module, each containing:
    • “x +0.05”: Moving 5 cm in the positive X direction
    • “y -0.02”: Moving 2 cm in the negative Y direction
    • “rz -18°”: Rotating -18 degrees around the Z axis
    • “moveTo ready”: Returning to the ready position
  4. “move to home”: The final movement to return the robot to its home position.

The second column shows green checkmarks and the number “1” for each stage, indicating that every step of the plan was successfully computed with one solution.

The “time” column displays the computational time for each component. We can see that the entire “Reusable Containers” task took 0.0383 seconds to compute, with individual stages taking milliseconds.

The “cost” column in this context represents a metric used by the motion planner. For most stages, it’s a very small value (0.0004 to 0.0017), meaning these movements are considered efficient or low-cost by the planner.

The “#” column consistently shows “1”, indicating that each stage has one solution.

The yellow highlighting on the “move to home” stage indicates that this is the currently selected or focused stage in the RViz interface.

This breakdown allows us to verify that our reusable module is indeed being repeated five times as intended, and that the overall motion plan is structured correctly with initial and final movements to ready and home positions.

Terminal Window – Planning Results

If you look at the terminal window, you’ll see the detailed planning results. Let’s interpret these outputs.

MoveIt Task Constructor uses a hierarchical planning approach. This means it breaks down the overall task into smaller, manageable stages and plans each stage individually while considering the connections between them.

  • Stage Creation: The terminal output shows each stage being added to the task, including the creation of the reusable module and its five instances.
  • Planning Process: After all stages are added, the planning process begins.

Arrow Interpretation in the Task State:

  • → (Right Arrow): Represents the forward flow of results from one stage to the next. This means that a stage has successfully generated a result, and it is passing that result to the next stage for further processing.
  • ← (Left Arrow): Indicates a backward flow of results. In MTC, some stages may require feedback from later stages to adjust their own results or to optimize the plan.
  • – (Dash): A dash indicates no information flowed in that direction.

Let’s analyze the task state output (a reassembled sketch of the full printout follows this list):

  1. The root “Reusable Containers” stage shows 1 – ← 1 → – 1, indicating one solution was found and propagated both forward and backward.
  2. For each stage, we see a pattern like this: – 0 → 1 → – 0 or – 0 → 1 → – 1
    • The first “0” means no solutions were propagated backward to this stage.
    • The “1” in the middle indicates one solution was found for this stage.
    • The last number (0 or 1) shows whether this solution was propagated forward to the next stage.
  3. The “Cartesian Path” stages, representing our reusable module, each show – 1 → 1 → – 1, meaning they received a solution from the previous stage, found their own solution, and passed it to the next stage.
  4. The individual movement stages (x +0.05, y -0.02, rz -18°) within each Cartesian Path show – 0 → 1 → – 0, indicating they found a solution but didn’t need to propagate it directly.
  5. The “moveTo ready” stages at the end of each Cartesian Path show – 0 → 1 → – 1, meaning they found a solution and passed it forward to the next module or final stage.
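
Putting these patterns together, the full task state printed to the terminal looks roughly like this. Note that this is reassembled from the patterns described above rather than captured verbatim, so the exact spacing and indentation in your terminal will differ:

1 – ← 1 → – 1 / Reusable Containers
– 0 → 1 → – 1    / current
– 0 → 1 → – 1    / move to ready
– 1 → 1 → – 1    / Cartesian Path
– 0 → 1 → – 0       / x +0.05
– 0 → 1 → – 0       / y -0.02
– 0 → 1 → – 0       / rz -18°
– 0 → 1 → – 1       / moveTo ready
(the Cartesian Path block repeats for each of the five module instances)
– 0 → 1 → – 1    / move to home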

These results demonstrate that our planner effectively generated solutions for each stage of the task, including five repetitions of our reusable module. The hierarchical structure allowed the planner to solve each small part of the problem independently while maintaining the overall sequence of movements.

The Stage summary at the end confirms that each major stage (current, move to ready, five Cartesian Paths, and move to home) found one solution with no failures. This indicates a successful planning process for our entire reusable motion sequence.

5-published-solution

By examining these results, we can see how the modular approach allows for efficient planning of complex, repetitive tasks. Each instance of the reusable module is planned independently, but within the context of the overall task, ensuring a cohesive and executable motion plan for the robot arm.

Analysis of the Results

Let’s break down what we did and what we learned from this project.

Our Modular Approach

We created a reusable module consisting of four stages:

  1. Move 5 cm in +X direction
  2. Move 2 cm in -Y direction
  3. Rotate -18 degrees around Z axis
  4. Move to ‘ready’ position

This module was then repeated five times in our overall task, bookended by initial and final movements.

The Results: A Stage-by-Stage Breakdown

Looking at our terminal output and RViz Motion Planning Tasks panel, here’s what we observed:

Task Creation:

  • Successfully added all stages, including five instances of our reusable module
  • Each module instance was created with its 4 stages, as designed

Planning Process:

  • The task planning completed successfully
  • Found 1 solution for the entire task

Detailed Task State:

  1. Root “Reusable Containers”: 1 – ← 1 → – 1
    • Indicates one solution was found and propagated both ways
  2. Individual Stages:
    • “current” and “move to ready”: – 0 → 1 → – 1
      • Successfully found a solution and passed it forward
    • Cartesian Path (reusable module): – 1 → 1 → – 1
      • Received a solution, found its own, and passed it forward
    • Individual movements (x, y, rz): – 0 → 1 → – 0
      • Found solutions but didn’t need to propagate directly
    • “moveTo ready” within modules: – 0 → 1 → – 1
      • Found a solution and passed it to the next stage
  3. Final “move to home”: – 0 → 1 → – 1
    • Successfully planned the final movement

Stage Summary

  • All stages (current, move to ready, five Cartesian Paths, move to home) found 1 solution with 0 failures.

The Big Picture

This experiment demonstrates several key advantages of our modular approach:

  1. Reusability: We successfully created a module that could be repeated multiple times within the larger task. This showcases the power of modular design in robotic motion planning.
  2. Efficiency: Each instance of our reusable module was planned independently, yet within the context of the overall task. This allows for efficient planning of complex, repetitive tasks.
  3. Robustness: The successful planning of all stages with no failures indicates that our modular approach is robust and can handle multiple repetitions of the same movement sequence.
  4. Flexibility: By breaking down the task into smaller, reusable components, we create a system that is adaptable. New movements or sequences can be added or modified without redesigning the entire task.
  5. Scalability: The ability to repeat our module five times without issues suggests that this approach could scale to even more complex sequences of movements.

By structuring our motion planning this way, we achieve a balance of simplicity and power. The reusable modules allow for faster development of complex tasks, while the hierarchical planning ensures that each part fits smoothly into the whole. 

Detailed Code Walkthrough

Now for the C++ part. Let’s go through each piece of this code, step by step.

cd ~/ros2_ws/src/mycobot_ros2/hello_moveit_task_constructor/src/
gedit modular.cpp

File Header and Includes

The code begins with a comprehensive comment block outlining the file’s purpose: demonstrating the use of MoveIt Task Constructor for robot motion planning. It introduces the key concept of SerialContainer, which is used to create reusable modules of movement stages. The file includes necessary headers for ROS 2, MoveIt, and the Task Constructor library, establishing the foundation for our modular motion planning demo.
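
If you want to sanity-check your copy of the file, the include set for this demo will look something like the following sketch (the exact headers may differ slightly in your version):

#include <rclcpp/rclcpp.hpp>                                      // ROS 2 C++ client library
#include <moveit/task_constructor/task.h>                         // Task: the top-level planning container
#include <moveit/task_constructor/container.h>                    // SerialContainer for reusable modules
#include <moveit/task_constructor/solvers/cartesian_path.h>       // Cartesian-space solver
#include <moveit/task_constructor/solvers/joint_interpolation.h>  // Joint-space interpolation solver
#include <moveit/task_constructor/stages/current_state.h>         // Stage: capture the current robot state
#include <moveit/task_constructor/stages/move_relative.h>         // Stage: relative Cartesian motions
#include <moveit/task_constructor/stages/move_to.h>               // Stage: move to a named or joint goal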

createModule Function

This function creates a reusable module for robot movement:    

It sets up a SerialContainer named “Cartesian Path” and configures it with four stages:

  1. Move 5 cm in the positive X direction
  2. Move 2 cm in the negative Y direction
  3. Rotate -18 degrees around the Z axis
  4. Move to the “ready” position

Each stage is created using either stages::MoveRelative or stages::MoveTo, configured with the appropriate movement parameters, and added to the container.
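
As a concrete illustration, the first stage of the module is built roughly like this (a sketch: the planner variable, container pointer, and frame name are assumptions, not copied from the file):

// Sketch of one MoveRelative stage inside createModule() (names assumed)
auto cartesian_planner = std::make_shared<solvers::CartesianPath>();

auto stage_x = std::make_unique<stages::MoveRelative>("x +0.05", cartesian_planner);
stage_x->setGroup(group);                      // plan for the robot's planning group

geometry_msgs::msg::Vector3Stamped direction;  // relative motion expressed as a direction vector
direction.header.frame_id = "world";           // assumed frame; use your robot's base frame
direction.vector.x = 0.05;                     // move 5 cm along +X
stage_x->setDirection(direction);

container->add(std::move(stage_x));            // append the stage to the SerialContainer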

createTask Function

This function creates the main task for robot movement:

It sets up the task with the following structure:

  • Add the current state as the starting point
  • Move to the “ready” position
  • Add five instances of the reusable module created by createModule
  • Move to the “home” position

This structure creates a sequence of movements that the robot will perform, repeating the same set of actions five times in a row.

Main Function

The main function orchestrates the entire demo.

ROS 2 Initialization and Node Setup

ROS 2 is initialized, and a node named “modular_demo” is created.

Spinning Thread

A separate thread is created to handle ROS 2 callbacks, allowing the node to process incoming messages and services.

Task Creation and Execution

The task is created using the createTask function. The code then attempts to plan the task.

Result Handling and Logging

The code includes comprehensive logging of the planning results, including the number of solutions found, the task state, and a summary of each stage’s performance.

Error Handling

The code includes error handling to catch and report any exceptions that occur during the planning process, including detailed task information in case of failure.

Completion

The program waits for the ROS 2 spinning thread to finish before exiting.

That’s it. Keep building!

Inverse Kinematics – ROS 2 Jazzy MoveIt Task Constructor

In this tutorial, we’ll explore how to implement inverse kinematics (IK) with clearance cost optimization using the MoveIt Task Constructor. We’ll create an application from scratch that demonstrates how to plan movements for a robotic arm while considering obstacle clearance. The output of your application will provide detailed insights into the planning process, including the number of solutions found and the performance of each stage.

Here is what your final output will look like (I am flipping back and forth between the two successfully found inverse kinematics solutions):

inverse-kinematics-solver-moveit-task-constructor

On a high level, your program will demonstrate a sophisticated approach to motion planning that does the following:

  1. Sets up a scene with the mycobot_280 robot and a spherical obstacle
  2. Defines a target pose for the robot’s gripper (end-effector)
  3. Uses the ComputeIK stage to find valid arm configurations reaching the target
  4. Applies a clearance cost term to favor solutions that keep the robot farther from obstacles
  5. Uses ROS 2 parameters to control the behavior of the clearance cost calculation

While OMPL and Pilz are motion planners that generate full trajectories, they rely on IK solutions like those computed in this code to determine feasible goal configurations for the robot. In a complete motion planning pipeline, this IK solver would typically be used to generate goal states, which OMPL or Pilz would then use to plan full, collision-free paths from the robot’s current position to the desired end-effector pose.

Real-World Use Cases

The code you will develop in this tutorial can serve as a foundation for various practical applications:

  • Robotic Assembly in Cluttered Environments
    • Generate arm configurations that avoid collisions with nearby parts or fixtures
    • Optimize for paths that maintain maximum clearance from obstacles
  • Bin Picking and Sorting
    • Plan motions that safely navigate around the edges of bins and other items
    • Minimize the risk of collisions in tight spaces
  • Collaborative Robot Operations
    • Ensure the robot maintains a safe distance from human work areas
    • Dynamically adjust paths based on changing obstacle positions
  • Quality Inspection Tasks
    • Generate smooth, collision-free paths for sensors or cameras to inspect parts
    • Optimize for viewpoints that balance clearance and inspection requirements

By the end of this tutorial, you’ll have a solid understanding of how to implement IK solutions with clearance cost optimization in your motion planning tasks. This approach will make your robotic applications more robust, efficient, and capable of operating safely in complex environments.

Let’s dive into the code and explore how to build this advanced motion planning application!

Prerequisites

All the code is here on my GitHub repository. Note that I am working with ROS 2 Jazzy, so the steps might be slightly different for other versions of ROS 2.

Create the Code

If you don’t already have ik_clearance_cost.cpp, open a new terminal window, and type:

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_demos/src/
gedit ik_clearance_cost.cpp

Add this code

/**
 * @file ik_clearance_cost.cpp
 * @brief Demonstrates using MoveIt Task Constructor for motion planning with collision avoidance.
 *
 * This program sets up a motion planning task for a mycobot_280 robot arm using MoveIt Task Constructor.
 * It creates a scene with an obstacle, computes inverse kinematics (IK) solutions, and plans a motion
 * while considering clearance from obstacles.
 *
 * @author Addison Sears-Collins
 * @date December 19, 2024
 */

#include <rclcpp/rclcpp.hpp>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/task_constructor/task.h>
#include <moveit/task_constructor/stages/fixed_state.h>
#include <moveit/task_constructor/stages/compute_ik.h>
#include <moveit/task_constructor/cost_terms.h>
#include "ik_clearance_cost_parameters.hpp"

// Use the moveit::task_constructor namespace for convenience
using namespace moveit::task_constructor;

/* ComputeIK(FixedState) */
int main(int argc, char** argv) {
  // Initialize ROS 2
  rclcpp::init(argc, argv);

  // Create a ROS 2 node
  auto node = rclcpp::Node::make_shared("ik_clearance_cost_demo");

  // Create a logger
  auto logger = node->get_logger();
  RCLCPP_INFO(logger, "Starting IK Clearance Cost Demo");

  // Start a separate thread to handle ROS 2 callbacks
  std::thread spinning_thread([node] { rclcpp::spin(node); });

  // Create a parameter listener for IK clearance cost parameters
  const auto param_listener = std::make_shared<ik_clearance_cost_demo::ParamListener>(node);
  const auto params = param_listener->get_params();
  RCLCPP_INFO(logger, "Parameters loaded: cumulative=%s, with_world=%s",
              params.cumulative ? "true" : "false",
              params.with_world ? "true" : "false");

  // Create a Task object to hold the planning stages
  Task t;
  t.stages()->setName("clearance IK");
  RCLCPP_INFO(logger, "Task created: %s", t.stages()->name().c_str());

  // Wait for 500 milliseconds to ensure ROS 2 is fully initialized
  rclcpp::sleep_for(std::chrono::milliseconds(500));

  // Load the robot model (mycobot_280)
  t.loadRobotModel(node);
  assert(t.getRobotModel()->getName() == "mycobot_280");
  RCLCPP_INFO(logger, "Robot model loaded: %s", t.getRobotModel()->getName().c_str());

  // Create a planning scene
  auto scene = std::make_shared<planning_scene::PlanningScene>(t.getRobotModel());
  RCLCPP_INFO(logger, "Planning scene created");

  // Set the robot to its default state
  auto& robot_state = scene->getCurrentStateNonConst();
  robot_state.setToDefaultValues();
  RCLCPP_INFO(logger, "Robot state set to default values");

  // Set the arm to its "ready" position
  [[maybe_unused]] bool found =
      robot_state.setToDefaultValues(robot_state.getJointModelGroup("arm"), "ready");
  assert(found);
  RCLCPP_INFO(logger, "Arm set to 'ready' position");

  // Create an obstacle in the scene
  moveit_msgs::msg::CollisionObject co;
  co.id = "obstacle";
  co.primitives.emplace_back();
  co.primitives[0].type = shape_msgs::msg::SolidPrimitive::SPHERE;
  co.primitives[0].dimensions.resize(1);
  co.primitives[0].dimensions[0] = 0.1;
  co.header.frame_id = t.getRobotModel()->getModelFrame();
  co.primitive_poses.emplace_back();
  co.primitive_poses[0].orientation.w = 1.0;
  co.primitive_poses[0].position.x = -0.183;
  co.primitive_poses[0].position.y = -0.14;
  co.primitive_poses[0].position.z = 0.15;
  scene->processCollisionObjectMsg(co);
  RCLCPP_INFO(logger, "Obstacle added to scene: sphere at position (%.2f, %.2f, %.2f) with radius %.2f",
              co.primitive_poses[0].position.x, co.primitive_poses[0].position.y,
              co.primitive_poses[0].position.z, co.primitives[0].dimensions[0]);

  // Create a FixedState stage to set the initial state
  auto initial = std::make_unique<stages::FixedState>();
  initial->setState(scene);
  initial->setIgnoreCollisions(true);
  RCLCPP_INFO(logger, "FixedState stage created");

  // Create a ComputeIK stage for inverse kinematics
  auto ik = std::make_unique<stages::ComputeIK>();
  ik->insert(std::move(initial));
  ik->setGroup("arm");

  // Set the target pose
  ik->setTargetPose(Eigen::Translation3d(-.183, 0.0175, .15) * Eigen::AngleAxisd(M_PI/4, Eigen::Vector3d::UnitX()));

  ik->setTimeout(1.0);
  ik->setMaxIKSolutions(100);

  // Set up the clearance cost term
  auto cl_cost{ std::make_unique<cost::Clearance>() };
  cl_cost->cumulative = params.cumulative;
  cl_cost->with_world = params.with_world;
  ik->setCostTerm(std::move(cl_cost));
  RCLCPP_INFO(logger, "Clearance cost term added to ComputeIK stage");

  // Add the ComputeIK stage to the task
  t.add(std::move(ik));
  RCLCPP_INFO(logger, "ComputeIK stage added to task");

  // Attempt to plan the task
  try {
    RCLCPP_INFO(logger, "Starting task planning");

    // Plan the task
    moveit::core::MoveItErrorCode error_code = t.plan(0);

    // Log the planning result
    if (error_code == moveit::core::MoveItErrorCode::SUCCESS) {
      RCLCPP_INFO(logger, "Task planning completed successfully");
      RCLCPP_INFO(logger, "Found %zu solutions", t.numSolutions());

      // Use printState to log the task state
      std::ostringstream state_stream;
      t.printState(state_stream);
      RCLCPP_INFO(logger, "Task state:\n%s", state_stream.str().c_str());

    } else {
      RCLCPP_ERROR(logger, "Task planning failed with error code: %d", error_code.val);

      // Use explainFailure to log the reason for failure
      std::ostringstream failure_stream;
      t.explainFailure(failure_stream);
      RCLCPP_ERROR(logger, "Failure explanation:\n%s", failure_stream.str().c_str());
    }

    // Log a simple summary of each stage
    RCLCPP_INFO(logger, "Stage summary:");
    for (size_t i = 0; i < t.stages()->numChildren(); ++i) {
      const auto* stage = t.stages()->operator[](i);
      RCLCPP_INFO(logger, "  %s: %zu solutions, %zu failures",
                  stage->name().c_str(), stage->solutions().size(), stage->failures().size());
    }

  } catch (const InitStageException& e) {
    RCLCPP_ERROR(logger, "InitStageException caught during task planning: %s", e.what());
  }

  RCLCPP_INFO(logger, "IK Clearance Cost Demo completed");

  // Wait for the spinning thread to finish
  spinning_thread.join();

  return 0;
}

Save the file, and close it.

Add the Parameters

Now let’s create a parameter file in the same directory as our source code.

gedit ik_clearance_cost_parameters.yaml

Add this code.

ik_clearance_cost_demo:
  cumulative:
    type: bool
    default_value: false
    read_only: true
  with_world:
    type: bool
    default_value: true
    read_only: true

The “cumulative” parameter determines how the robot measures its closeness to obstacles. 

  • When set to false, the robot only considers its single closest point to any obstacle. 
  • When set to true, it considers the distance of multiple points on the robot to obstacles, adding these distances together. 

This “cumulative” approach provides a more thorough assessment of the robot’s overall proximity to obstacles, potentially leading to more cautious movements. 

The “with_world” parameter determines what the robot considers as obstacles when planning its movements. 

  • When set to true, the robot takes into account all known objects in its environment – this could include tables, chairs, walls, or any other obstacles that have been mapped or sensed. It’s like the robot is aware of its entire surroundings. 
  • When set to false, the robot might only consider avoiding collisions with itself (self-collisions) or a specific subset of objects, ignoring the broader environment. 
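
Because both parameters are declared read_only, they can only be set when the node starts. For example, you could override them at launch time like this (executable name assumed):

ros2 run mycobot_mtc_demos ik_clearance_cost --ros-args -p cumulative:=true -p with_world:=false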

Save the file, and close it.
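
For the #include "ik_clearance_cost_parameters.hpp" line in the source file to resolve, the package’s CMakeLists.txt must generate the parameter library from this YAML file. Here is a minimal sketch of the usual generate_parameter_library wiring (target names assumed; your CMakeLists.txt may already contain the equivalent):

# Generate a C++ parameter struct and listener from the YAML definition (target name assumed)
find_package(generate_parameter_library REQUIRED)

generate_parameter_library(ik_clearance_cost_parameters
  src/ik_clearance_cost_parameters.yaml
)

add_executable(ik_clearance_cost src/ik_clearance_cost.cpp)
target_link_libraries(ik_clearance_cost ik_clearance_cost_parameters)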

Build the Code

cd ~/ros2_ws/
colcon build
source ~/.bashrc 

Launch

Launch everything:

bash ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts/mycobot_280_mtc_demos.sh ik_clearance_cost

OR

mtc_demos ik_clearance_cost

Here is what you should see:

ik-clearance-moveit-task-constructor

Understanding the Motion Planning Results

RViz – “Motion Planning Tasks” Panel

The “Motion Planning Tasks” panel in RViz displays the structure and outcomes of our IK clearance cost task. The panel shows a hierarchical view with “Motion Planning Tasks” at the root, followed by “clearance IK”.

2-motion-planning-tasks

Under “clearance IK”, two stages are visible:

  1. “IK”: This represents the ComputeIK stage where inverse kinematics solutions are generated.
  2. “initial state”: This corresponds to the FixedState stage that sets the initial robot configuration.

The second column shows green checkmarks and numbers indicating the quantity of successful solutions for each task component. The panel shows that 2 solutions were found for the overall “clearance IK” task, both originating from the “IK” stage.

The “time” column displays the computational time for each component. For the “IK” stage, we see a value of 1.0055 seconds, indicating the duration of the inverse kinematics calculations.

The “cost” column is particularly noteworthy in this context. For the successful IK solutions, we observe a cost value of 66.5330. This cost is directly related to the clearance cost term we incorporated into our ComputeIK stage. 

The “comment” column provides additional context for the solutions. It displays the clearance distances between the obstacle and a specific robot part, “gripper_left1”. This information quantifies how the robot positions itself relative to the obstacle in the computed solutions.

Terminal Window – Planning Results

Analyzing the terminal output of our IK clearance cost demo:

  1. The mycobot_280 robot model was loaded successfully.
  2. A planning scene was generated, and the robot was positioned in its ‘ready’ configuration.
  3. An obstacle, represented by a sphere, was introduced to the scene at coordinates (-0.18, -0.14, 0.15) with a 0.10 m radius.
  4. The FixedState and ComputeIK stages were established and incorporated into the task.
  5. Task planning concluded successfully, yielding 2 solutions.

Analyzing the terminal output of our IK clearance cost demo, we see the following task structure:

3-planning-results

This structure provides insights into the flow of solutions through our planning task:

  1. clearance IK (top level):
    • 2 solutions were propagated backward
    • 2 solutions were propagated forward
    • 2 solutions were ultimately generated
  2. IK stage:
    • 2 solutions were generated at this stage
    • 2 solutions were propagated backward
    • 2 solutions were propagated forward
  3. initial state:
    • 1 solution was generated at this stage
    • 1 solution was propagated backward
    • 1 solution was propagated forward

This output demonstrates the bidirectional nature of the planning process in the MoveIt Task Constructor. The initial state provides a starting point, which is then used by the IK stage to generate solutions. These solutions are propagated both forward and backward through the planning pipeline.

The fact that we see two solutions at the IK stage indicates that our ComputeIK stage, incorporating the clearance cost term, successfully found two distinct inverse kinematics solutions that satisfied our constraints. These solutions maintained sufficient clearance from the obstacle while reaching the target pose.

The propagation of these two solutions both forward and backward means they were feasible within the context of both the initial state and the overall task requirements. This bidirectional flow helps ensure that the generated solutions are consistent and achievable given the robot’s starting configuration and the task’s goals.

By examining these results in conjunction with the RViz visualization, you can gain a comprehensive understanding of how the robot’s configuration changes to maintain clearance from the obstacle while achieving the desired pose of the gripper.

Detailed Code Walkthrough

Now for the C++ part. Let’s go through each piece of this code, step by step.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_mtc_demos/src/
gedit ik_clearance_cost.cpp

File Header and Includes

The code begins with a comprehensive comment block outlining the file’s purpose: demonstrating motion planning with collision avoidance using the MoveIt Task Constructor. It describes the program’s functionality, which creates a scene with an obstacle and computes inverse kinematics (IK) solutions while considering clearance from obstacles. 

The file includes necessary headers for ROS 2, MoveIt, and the Task Constructor library, establishing the foundation for our IK clearance cost demo.

Main Function

All the logic for this program is contained within the main function. Let’s break it down into its key components.

ROS 2 Initialization and Node Setup

The code initializes ROS 2 and creates a node named “ik_clearance_cost_demo”. It sets up a logger for informational output. This setup ensures proper communication within the ROS 2 ecosystem.

Parameter Handling

The code sets up a parameter listener to load and manage parameters for the IK clearance cost demo. It logs whether the clearance cost should be cumulative and whether it should consider the world.

Task Setup and Robot Model Loading

A Task object is created and named “clearance IK”. The robot model (“mycobot_280”) is loaded and verified. This step is important for accurate motion planning based on the specific robot’s characteristics.

Planning Scene Setup

The code creates a planning scene, sets the robot to its default state, and then sets the arm to its “ready” position. 

Obstacle Creation

An obstacle (a sphere) is created and added to the planning scene. This obstacle will be considered during the IK calculations to ensure clearance.

FixedState Stage Setup

A FixedState stage is created to set the initial state of the robot. It uses the previously configured scene and ignores collisions at this stage.

ComputeIK Stage Setup

A ComputeIK stage is created for inverse kinematics calculations. It’s configured with the initial state, target group (“arm”), target pose, timeout, and maximum number of IK solutions to compute.
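
For reference, the target pose passed to setTargetPose is just a composed Eigen transform; spelled out, the expression in the code is equivalent to:

// A position in the model frame plus a 45-degree rotation about the X axis
Eigen::Isometry3d target = Eigen::Translation3d(-0.183, 0.0175, 0.15)    // gripper position (meters)
                         * Eigen::AngleAxisd(M_PI / 4,                   // rotate pi/4 rad (45 degrees)...
                                             Eigen::Vector3d::UnitX());  // ...about the X axis
ik->setTargetPose(target);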

Clearance Cost Term Setup

A clearance cost term is created and added to the ComputeIK stage. This cost term will influence the IK solutions to maintain clearance from obstacles.

Task Planning and Execution

The code attempts to plan the task using the defined stages. It includes error handling for potential exceptions during planning, ensuring robustness in various scenarios.

Results Logging

The code logs the results of the planning process, including the number of solutions found, the task state, or failure explanations if the planning was unsuccessful.

Node Spinning

A separate thread is created for spinning the ROS 2 node. This allows the program to handle callbacks and events while performing its main tasks.

That’s it. Keep building!