Forest fires cause huge losses and are a serious problem facing many countries and regions worldwide, including the USA, Canada, Brazil, Siberia, and Indonesia, to name a few. Automatic identification of forest fires in images is thus an important research area for minimizing disasters, aiding mitigation planning, and designing rescue tactics. Artificial Intelligence technologies, especially deep neural networks, have recently emerged with the promise of detecting fires in images with higher accuracy. However, the massive energy consumption of deep neural networks thwarts their widespread adoption, especially for onsite fire detection using low-power devices such as those embedded in a drone or an artificial satellite. In this paper, we develop multiple deep neural network models, namely a Convolutional Neural Network (CNN), a Deep Belief Network (DBN), an Auto Encoder (AEnc), and a U-net model, to detect forest fires and systematically analyze their accuracy and energy consumption using the IEEE FLAME dataset, which is openly available on IEEE DataPort. After developing the models, we systematically pruned them, retrained them, and analyzed their accuracy and energy consumption upon deployment. Our analysis shows that the CNN has the highest accuracy (almost 99%) on the validation data set, whereas the DBN model consumes the least amount of energy after deployment on both CPU and GPU. The trained models are deployed on a website for use. The source code can be found on GitHub (https://github.com/akdasUAF/ForestFireDetection).
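The prune-retrain loop summarized above can be sketched, independently of any particular framework, as magnitude-based unstructured pruning: the smallest-magnitude weights are zeroed out, and the resulting mask is held fixed during retraining so the pruned connections stay removed. This is a minimal illustrative sketch in NumPy; the paper's actual pruning procedure and framework may differ.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Returns the pruned weight array and a boolean mask marking the
    surviving (non-pruned) weights. During retraining, gradient updates
    would be multiplied by this mask so pruned weights stay at zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Toy example: prune half of a 2x2 weight matrix
w = np.array([[0.10, -0.50],
              [0.90,  0.05]])
pruned, mask = magnitude_prune(w, sparsity=0.5)
# The two smallest-magnitude entries (0.10 and 0.05) are zeroed;
# -0.50 and 0.90 survive.
```

The energy savings reported for pruned models come from this kind of sparsification: fewer nonzero weights mean fewer multiply-accumulate operations at inference time, which is what makes deployment on low-power drone or satellite hardware more feasible.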
