Location
https://www.kennesaw.edu/ccse/events/computing-showcase/sp25-cday-program.php
Document Type
Event
Start Date
April 15, 2025, 4:00 PM
Description
This project explores the implementation of real-time object detection using the You Only Look Once (YOLO) architecture. Leveraging YOLO's speed and accuracy, we developed a system capable of identifying and localizing multiple objects within live video streams. Our implementation focused on optimizing YOLO's performance for real-time applications, specifically addressing the trade-off between speed and accuracy. We employed a pre-trained YOLO model and fine-tuned it on a custom dataset tailored to specific object classes; this fine-tuning aimed to enhance the model's ability to recognize objects in our target environment. The system was implemented using Python and the OpenCV library, enabling seamless integration with camera input and real-time video processing. Performance was evaluated on frames per second (FPS), mean Average Precision (mAP), and detection latency. Results demonstrate that the system achieves high FPS, enabling real-time detection, while maintaining acceptable mAP for accurate object recognition. This project showcases the practicality of YOLO for applications requiring fast and reliable object detection, such as surveillance, autonomous driving, and robotics.
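A minimal sketch of the kind of detection loop the abstract describes is shown below. It assumes the Ultralytics YOLO package for inference and a hypothetical fine-tuned weights file (custom_yolo.pt); the team's actual YOLO version, weights, and pipeline are not specified in the abstract, so treat the details as illustrative only.

    import time
    import cv2
    from ultralytics import YOLO  # assumed inference library, not confirmed by the abstract

    model = YOLO("custom_yolo.pt")  # hypothetical fine-tuned weights file
    cap = cv2.VideoCapture(0)       # default camera as the live video source

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        start = time.time()
        result = model(frame, verbose=False)[0]  # detect objects in this frame
        latency = time.time() - start            # per-frame detection latency

        # Draw each detected box with its class label and confidence score
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            label = result.names[int(box.cls[0])]
            conf = float(box.conf[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, f"{label} {conf:.2f}", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

        # Overlay instantaneous FPS derived from the measured latency
        cv2.putText(frame, f"FPS: {1.0 / max(latency, 1e-6):.1f}", (10, 25),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

        cv2.imshow("Real-Time Object Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()

In a setup like this, FPS and latency are measured per frame inside the live loop as above, while mAP would typically be computed offline against a labeled validation split rather than during live capture.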
Included in
GC-074 Real-Time Object Detection