Field-Ready Robotics: Sensor-Based Crop Monitoring and Precision Farming

Disciplines

Robotics

Abstract (300 words maximum)

Monitoring crop health is a critical part of ensuring consistent, high-quality output for farmers. A powerful method for evaluating crop health is multispectral imaging, which uses a Red-Green-Near-Infrared (RGN) camera to extract a variety of visual health indicators known as vegetative indices. Although this approach is effective, it is not available to all farmers, as field-ready multispectral cameras are often beyond budget. In our research, we aim to tackle this problem by extracting RGN vegetative indices from Red-Green images. We implement an image generation model based on artificial neural networks, training it on captured Red-Green-Near-Infrared images paired with their corresponding Red-Green images. The specific use case evaluated is predicting the Normalized Difference Vegetation Index (NDVI) of a Red-Green image, as NDVI is the most common index used for visual health monitoring. Ultimately, this generative approach would allow a consumer-grade Red-Green-Blue camera to collect pertinent crop health information, saving farmers money while still letting them monitor the health of their crops effectively.
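For context, NDVI is conventionally computed per pixel as (NIR − Red) / (NIR + Red), yielding values in [−1, 1] where higher values indicate denser, healthier vegetation. A minimal sketch of that computation with NumPy is below; the channel ordering (Red, Green, NIR) and the `rgn` array name are illustrative assumptions, not details from the project itself:

```python
import numpy as np

def ndvi(rgn):
    """Compute per-pixel NDVI from an RGN image.

    rgn: float array of shape (H, W, 3), channels assumed ordered
    (Red, Green, NIR). NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    """
    red = rgn[..., 0].astype(np.float64)
    nir = rgn[..., 2].astype(np.float64)
    denom = nir + red
    # Guard against division by zero on pixels where NIR + Red == 0
    safe = np.where(denom == 0.0, 1.0, denom)
    return np.where(denom == 0.0, 0.0, (nir - red) / safe)

# Example: a single pixel with Red=0.2, Green=0.5, NIR=0.8
pixel = np.array([[[0.2, 0.5, 0.8]]])
print(ndvi(pixel)[0, 0])  # (0.8 - 0.2) / (0.8 + 0.2) = 0.6
```

The generative model described in the abstract would stand in for the NIR channel here, predicting NDVI-like output directly from a standard Red-Green image.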

Use of AI Disclaimer

No

Academic department under which the project should be listed

SPCEET – Robotics and Mechatronics Engineering

Primary Investigator (PI) Name

Muhammad Hassan Tanveer
