Search and Rescue Operations Utilizing a Robot Dog With Custom YOLO V8 Models and Depth Camera Data

Disciplines

Other Engineering | Robotics

Abstract (300 words maximum)

Natural disasters are occurring more frequently across the globe, and post-event analysis and data capture are vital for search and rescue operations. The need for environment mapping through object identification is evident, particularly in navigating complex disaster scenarios such as individuals trapped under debris. In post-disaster scenarios, search and rescue (SAR) teams face the challenge of processing extensive data to locate trapped individuals. Implementing a machine learning model can streamline this process by efficiently scanning environments to identify potential survivors. This research explores the use of a custom dataset to train a YOLO V8 model optimized for human detection, and integrates that model with a robot dog equipped with a depth camera, enabling better analysis of environments with uneven terrain. The model is trained on images of individuals trapped under rubble in search and rescue scenarios. These images are labeled using Roboflow to create a custom dataset for training the YOLO V8 model. Integrating YOLO V8's bounding-box and segmentation data with the Intel RealSense depth camera helps determine the exact location of a trapped person. The collected data points are crucial in determining the optimal angle of rescue, and the incorporation of a color map and grayscale video further aids in distinguishing debris from human subjects. Experimental results highlight the effectiveness of the custom dataset, in conjunction with YOLO V8, for locating trapped humans in rubble environments. Furthermore, visual accuracy is confirmed through displayed segmentation of human subjects in different rubble scenarios. The accuracy of the trained model is represented by analytical graphs derived from validation test data, which show the precision of different model weights. Integrating machine learning models with robust robotic mobility can enhance the efficiency and effectiveness of disaster response efforts.
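The abstract describes fusing a YOLO V8 detection with RealSense depth readings to localize a trapped person in 3D. A minimal sketch of the underlying computation, assuming a pinhole camera model with hypothetical intrinsics (fx, fy, cx, cy) and a hypothetical detection box; this mirrors the deprojection that librealsense performs in `rs2_deproject_pixel_to_point`, not the authors' actual code:

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (meters) to a
    camera-frame (X, Y, Z) point using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def bbox_center(x1, y1, x2, y2):
    """Center pixel of a detector bounding box (x1, y1, x2, y2)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# Hypothetical values: a YOLO box for a detected person and the depth
# sampled at its center pixel; intrinsics are placeholder numbers.
u, v = bbox_center(300, 180, 420, 360)
point = deproject(u, v, depth_m=2.5, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```

In a live pipeline, the depth value would be sampled from the aligned depth frame at the box center (or averaged over the segmentation mask), and the intrinsics would come from the camera's calibration rather than hard-coded constants.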

Academic department under which the project should be listed

SPCEET - Robotics and Mechatronics Engineering

Primary Investigator (PI) Name

Muhammad Hassan Tanveer


