Real Time Human Fall Detection with Artificial Intelligence Using an Infrared Camera on an Unmanned Aerial Vehicle

Disciplines

Other Aerospace Engineering | Other Computer Engineering | Systems Engineering and Multidisciplinary Design Optimization

Abstract (300 words maximum)

The use of Artificial Intelligence (AI) has dramatically increased in the lives of average Americans. These uses range from reading articles, answering questions, and generating media to identifying objects (annotation) within existing media, such as pictures and videos. Research on AI’s ability to annotate media has grown in popularity as more open-source models have become publicly available. However, these annotations are limited by a model’s inability to reliably identify a subject from varying angles and distances. Kennesaw State University’s Aerospace Education and Research Organization (AERO) Lab is researching live, AI-assisted object identification using aerial footage. The researchers employ modern Commercial Off-The-Shelf (COTS) Unmanned Aerial Vehicles (UAVs) to capture high-quality aerial footage, then use that footage to train an open-source object-identification model, You Only Look Once (YOLO), to annotate common objects such as people, cars, and bikes in aerial pictures and videos. With this custom model, the AERO Lab has substantially improved identification across a wide range of objects, raised the confidence level of the annotations, added specific object orientations (sitting, standing, lying down), and enabled varying angles and distances to the subject in the footage without a significant reduction in confidence. These annotations are displayed in real time on a monitor for the user. The model is trained not only on Red, Green, Blue (RGB) imagery but also on thermal and infrared imagery, which allows it to operate at night and in heavily forested areas. The researchers aim to apply this capability to human search and rescue missions, as well as disaster relief efforts for locating survivors.
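The post-processing behavior the abstract describes, keeping only confident detections and reporting an orientation class that can indicate a fall, can be sketched as below. All names here (the orientation labels, the 0.5 threshold, the detection tuple shape) are illustrative assumptions; the abstract does not specify the Lab's actual pipeline or class list.

```python
# Minimal sketch of confidence filtering and fall flagging on
# YOLO-style detections. Detection format, class names, and the
# threshold are assumed for illustration, not taken from the project.

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff; tuned per model in practice

def filter_annotations(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep detections whose confidence meets the threshold.

    Each detection is (label, confidence, bbox), with bbox given as
    (x, y, w, h) in pixels.
    """
    return [d for d in detections if d[1] >= threshold]

def find_fallen_subjects(detections):
    """Flag confident detections whose orientation suggests a fall."""
    kept = filter_annotations(detections)
    return [d for d in kept if d[0] == "person_laying"]

# Hypothetical detections from one aerial frame
frame_detections = [
    ("person_standing", 0.91, (120, 40, 60, 160)),
    ("person_laying", 0.78, (300, 220, 140, 50)),
    ("person_laying", 0.32, (500, 400, 90, 40)),  # discarded: low confidence
]
print(find_fallen_subjects(frame_detections))
```

In a live system, this filtering step would run on each frame's detections before drawing annotations to the operator's monitor.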

Use of AI Disclaimer

no

Academic department under which the project should be listed

SPCEET – Mechanical Engineering

Primary Investigator (PI) Name

Adeel Khalid

