Location

https://www.kennesaw.edu/ccse/events/computing-showcase/fa25-cday-program.php

Document Type

Event

Start Date

24-11-2025 4:00 PM

Description

Diabetic Retinopathy (DR) is a major cause of avoidable blindness among diabetic patients worldwide. Early screening is critical, but manual diagnosis is time-consuming and requires specialists. This paper presents a deep learning system that automatically analyzes retinal fundus images and performs a focused, binary classification to distinguish 'No DR' (Healthy) from 'Severe-Stage DR' (Severe/Proliferative). We benchmark three prominent architectures: ResNet-50, EfficientNet-B0, and a Vision Transformer (ViT-B/16). The models are trained and evaluated on a custom-balanced, binary dataset derived from the APTOS 2019 collection. We conduct two experiments, one with a smaller dataset (N=500) and one with a larger dataset (N=976). Our results show that all three models achieve near-perfect performance on this simplified binary task across evaluation accuracy, F1 score, and AUC. We further employ Grad-CAM for model interpretability, which reveals that while all models perform well, the CNNs' areas of focus align more consistently with clinical pathology than the ViT's. This work confirms the high viability of deep learning for a targeted, binary DR screening task.
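
As an illustrative companion to the abstract, the sketch below shows one way the three benchmarked backbones could be set up for the binary 'No DR' vs. 'Severe-Stage DR' task. It is not the authors' released code: the torchvision pretrained-weight choices, 224x224 input size, batch size, and AdamW learning rate are assumptions made for the example.

```python
# Minimal sketch, not the authors' implementation: building the three
# benchmarked architectures with 2-class heads for 'No DR' vs 'Severe-Stage DR'.
# Weight enums, input size (224x224), and optimizer settings are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # 0 = 'No DR' (Healthy), 1 = 'Severe-Stage DR'

def build_model(name: str) -> nn.Module:
    """Return an ImageNet-pretrained backbone with its classifier head
    replaced by a 2-class linear layer."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "efficientnet_b0":
        m = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_CLASSES)
    elif name == "vit_b_16":
        m = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
        m.heads.head = nn.Linear(m.heads.head.in_features, NUM_CLASSES)
    else:
        raise ValueError(f"Unknown architecture: {name}")
    return m

# One illustrative fine-tuning step on a dummy batch standing in for
# preprocessed APTOS 2019 fundus images.
model = build_model("resnet50")
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)          # placeholder fundus image batch
labels = torch.randint(0, NUM_CLASSES, (8,))  # placeholder binary labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In the same spirit, each fine-tuned model could then be inspected with a Grad-CAM tool on held-out fundus images to compare where the CNNs and the ViT focus, as the abstract describes.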

Nov 24th, 4:00 PM

GRM-1249 AI-assisted Diabetic Retinopathy Screening from Fundus Images
