Kuka motion planning simulation

Brief Overview

Autonomously track a pen using computer vision, grab it, and pass it to the next robot.

Video demo


Project Description

Performed background subtraction and color thresholding with an RGBD camera to determine the 3D location of a pen. Programmed a PincherX 100 4-DOF robot arm in Python to grab the pen using inverse kinematics.

Tasks/Steps

  1. The code uses the pyrealsense2 library to interface with the RealSense camera, along with numpy and cv2 for image processing.
  2. It imports the InterbotixManipulatorXS class from the interbotix_xs_modules.xs_robot.arm module for robot arm control.
  3. Initializes the robot arm (bot), sets it to the sleep pose, releases the gripper, and adjusts the gripper pressure (see the arm-setup sketch after this list).
  4. Configures the RealSense pipeline for depth and color streams at the specified resolutions.
  5. Starts the RealSense pipeline and retrieves device information, including the intrinsic parameters of the color and depth streams (see the pipeline sketch below).
  6. Captures frames from the RealSense camera, aligns the depth and color frames, and removes the background based on depth information (see the background-removal sketch below).
  7. Uses color thresholding and contour detection to locate the pen in the color image.
  8. Calculates the centroid and average depth of the detected pen (see the detection sketch below).
  9. Converts the pixel coordinates and depth to real-world coordinates (see the deprojection sketch below).
  10. Displays the color image annotated with the detected pen, its centroid, and its real-world coordinates.
  11. Allows user interaction via keyboard keys (h, s, c, o, l, g) for actions such as moving the arm, releasing/grasping the gripper, and setting end-effector poses (see the interaction sketch below).
  12. Pressing 'q' or Esc closes the image window and stops the RealSense pipeline.
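
Code sketches

The snippets below are illustrative sketches of each stage, not the project's exact source. First, a minimal version of the arm setup from steps 1-3, assuming the Interbotix Python API with the PincherX 100 model name "px100"; the pressure value is an assumed example:

```python
from interbotix_xs_modules.xs_robot.arm import InterbotixManipulatorXS

# Create the interface for a PincherX 100 with its default joint groups.
bot = InterbotixManipulatorXS("px100", group_name="arm", gripper_name="gripper")

# Move to the sleep pose, open the gripper, and lower the grasp effort
# so a thin object like a pen is not crushed (50% pressure is assumed).
bot.arm.go_to_sleep_pose()
bot.gripper.release()
bot.gripper.set_pressure(0.5)
```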
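
Steps 4-5 map onto the standard pyrealsense2 pipeline setup; the 640x480 @ 30 FPS stream configuration is an assumption:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

# Depth (z16) and color (bgr8) streams at 640x480 @ 30 FPS (assumed values).
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)

# Scale factor that converts raw depth units to meters.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

# Intrinsics (focal lengths, principal point, distortion) for each stream.
color_intrin = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
depth_intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
```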
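
The background-removal sketch for step 6, continuing from the pipeline sketch above; it follows the librealsense align example, and the ~1 m clipping distance and gray fill value are assumptions:

```python
import numpy as np
import pyrealsense2 as rs

align = rs.align(rs.stream.color)  # map depth pixels onto the color frame

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth_frame = aligned.get_depth_frame()
color_frame = aligned.get_color_frame()

depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())

# Gray out everything farther than ~1 m (assumed clipping distance) or
# with invalid (zero) depth, leaving only the foreground where the pen sits.
clip = 1.0 / depth_scale
depth_3d = np.dstack((depth_image,) * 3)
bg_removed = np.where((depth_3d > clip) | (depth_3d <= 0), 153, color_image)
```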
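
The detection sketch for steps 7-8, run on the background-removed image; the HSV range below is a placeholder for the pen's color and would need tuning for a real setup:

```python
import cv2
import numpy as np

hsv = cv2.cvtColor(bg_removed, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([120, 80, 50]), np.array([150, 255, 255]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pen = max(contours, key=cv2.contourArea)  # assume the largest blob is the pen

    # Centroid from image moments.
    M = cv2.moments(pen)
    if M["m00"] > 0:
        cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])

        # Average depth (meters) over the pen's pixels, ignoring invalid zeros.
        pen_mask = np.zeros(mask.shape, dtype=np.uint8)
        cv2.drawContours(pen_mask, [pen], -1, 255, thickness=cv2.FILLED)
        depths = depth_image[pen_mask == 255] * depth_scale
        avg_depth = float(np.mean(depths[depths > 0]))
```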
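
The deprojection sketch for step 9 is a single call, using the color-stream intrinsics and the centroid and average depth from the sketches above:

```python
import pyrealsense2 as rs

# Pixel centroid + depth (meters) -> 3D point in the camera frame (meters).
x, y, z = rs.rs2_deproject_pixel_to_point(color_intrin, [cx, cy], avg_depth)
```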
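
Finally, the interaction sketch for steps 10-12 as one loop-body iteration; the exact key-to-action mapping is not documented above, so the bindings and the target pose here are assumptions:

```python
import cv2

# Annotate the detection and its real-world coordinates.
cv2.circle(color_image, (cx, cy), 5, (0, 0, 255), -1)
cv2.putText(color_image, f"({x:.3f}, {y:.3f}, {z:.3f}) m", (cx + 10, cy),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imshow("Pen tracker", color_image)

key = cv2.waitKey(1) & 0xFF
if key == ord('h'):
    bot.arm.go_to_home_pose()
elif key == ord('s'):
    bot.arm.go_to_sleep_pose()
elif key == ord('o'):
    bot.gripper.release()
elif key == ord('g'):
    bot.gripper.grasp()
elif key == ord('c'):
    # Assumed binding: move the end effector toward the pen (the
    # camera-to-robot frame transform is omitted in this sketch).
    bot.arm.set_ee_pose_components(x=0.2, z=0.15)
elif key in (ord('q'), 27):  # 'q' or Esc
    cv2.destroyAllWindows()
    pipeline.stop()
```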