Optimizing Object Manipulation and Grasp Point Detection Using Stereo Camera
Disciplines
Robotics
Abstract (300 words maximum)
Existing robotic arm systems often rely on pre-trained models or operate within controlled environments using known positions or fiducial markers for object manipulation. Although these methods achieve a considerable level of accuracy, they are limited by lengthy training processes and the need for carefully controlled setups. This research explores a methodology that leverages stereo cameras to develop a generalized algorithm, aiming to overcome these limitations and enable more versatile robotic object manipulation in uncontrolled environments. The proposed solution uses an Intel RealSense stereo camera with YOLO models for initial object identification and combines the resulting segmentation masks with the camera's depth values to construct a detailed 3D point cloud around the detected object. By analyzing this point cloud with feature extraction techniques such as RANSAC and passing it through our proposed filters, the algorithm predicts a largely complete, stable shape of the object. The grasp point is then determined by identifying surfaces within the shape that provide high stability and minimize potential slippage, followed by computing an optimal approach angle and position for the robotic gripper. This approach enables robotic arm systems to autonomously adapt to a variety of objects in uncontrolled environments, significantly expanding their practical applications beyond traditional setups. For example, it could support modular robotic systems designed for sample gathering on extraterrestrial missions, where adaptability and autonomous operation are crucial.
Academic department under which the project should be listed
SPCEET - Robotics and Mechatronics Engineering
Primary Investigator (PI) Name
Muhammad Hassan Tanveer
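To make the pipeline described in the abstract concrete, the minimal Python sketch below illustrates one possible realization of its steps, assuming an Intel RealSense camera driven through pyrealsense2, an Ultralytics YOLO segmentation model, and Open3D's RANSAC plane fitting. The weights file, stream resolution, thresholds, and the centroid-of-dominant-plane grasp heuristic are illustrative assumptions (the sketch also assumes at least one object is detected), not the exact filters or grasp criteria proposed in this work.

# Illustrative sketch only -- parameters and the grasp heuristic are assumptions.
import numpy as np
import cv2
import pyrealsense2 as rs
import open3d as o3d
from ultralytics import YOLO

# 1. Start aligned depth + color streams on the RealSense camera (assumed 640x480 @ 30 fps).
pipe, cfg = rs.pipeline(), rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipe.start(cfg)
frames = rs.align(rs.stream.color).process(pipe.wait_for_frames())
depth_frame = frames.get_depth_frame()
color_img = np.asanyarray(frames.get_color_frame().get_data())
intrin = depth_frame.profile.as_video_stream_profile().intrinsics

# 2. Identify and segment the object with a YOLO segmentation model
#    ("yolov8n-seg.pt" is a placeholder weights file).
result = YOLO("yolov8n-seg.pt")(color_img)[0]
mask = result.masks.data[0].cpu().numpy()                  # first detected object
mask = cv2.resize(mask, (color_img.shape[1], color_img.shape[0])) > 0.5

# 3. Back-project the masked pixels into a 3D point cloud using the depth values.
pts = []
for y, x in zip(*np.nonzero(mask)):
    d = depth_frame.get_distance(int(x), int(y))
    if d > 0:                                               # skip invalid depth readings
        pts.append(rs.rs2_deproject_pixel_to_point(intrin, [float(x), float(y)], d))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.asarray(pts))

# 4. Extract a dominant surface with RANSAC; its centroid stands in for the grasp point
#    and its normal for the gripper approach direction (simplified heuristic).
plane, inliers = pcd.segment_plane(distance_threshold=0.005, ransac_n=3, num_iterations=500)
grasp_point = np.asarray(pcd.points)[inliers].mean(axis=0)
approach_dir = -np.asarray(plane[:3])
print("Grasp point (m):", grasp_point, "Approach direction:", approach_dir)
pipe.stop()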