Presenter Information

Nursat Jahan

Location

https://www.kennesaw.edu/ccse/events/computing-showcase/fa25-cday-program.php

Document Type

Event

Start Date

24-11-2025 4:00 PM

Description

Edge Artificial Intelligence (AI) refers to running AI inference directly on local devices such as wearables, sensors, and mobile systems rather than relying on cloud computing. The growth of Edge AI has created strong demand for efficient inference on resource-limited devices, which must deliver real-time inference while operating under strict battery constraints. Although significant model-level optimizations exist for power-intensive inference workloads, operating system (OS) level support is limited: existing OS schedulers often neglect the energy limits of edge devices because they prioritize fairness or throughput. In this research, we propose an OS-level framework to bridge this gap by introducing a Dynamic Energy-Aware Scheduler (DEAS) that adjusts the CPU frequency and the number of active cores based on workload conditions to reduce energy per inference. Preliminary simulation results show improved performance per watt, confirming that dynamic scheduling at the OS level can significantly reduce energy consumption in edge AI systems.
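The core idea of a DEAS-style policy can be illustrated with a minimal sketch. This is not the authors' implementation: the power model (power ≈ cores · (a·f³ + b)), the latency model (work / (f · cores)), the frequency list, and the core count are all illustrative assumptions chosen to show how a scheduler might search frequency/core configurations for the lowest energy per inference that still meets a real-time deadline.

```python
# Hypothetical sketch of a DEAS-style decision step (illustrative models only).
# Assumptions: dynamic power scales with f^3 per core plus static power b;
# latency = work / (f * cores). Neither model comes from the source abstract.
from itertools import product

FREQS_GHZ = [0.6, 1.0, 1.4, 1.8]   # assumed available CPU frequency steps
MAX_CORES = 4                      # assumed core count on the edge device

def energy_per_inference(work, f, cores, a=1.0, b=0.2):
    """Energy = power * latency under the simple models above."""
    latency = work / (f * cores)         # seconds per inference
    power = cores * (a * f**3 + b)       # watts: dynamic + static per core
    return power * latency

def deas_select(work, deadline):
    """Pick the (freq, cores) pair minimizing energy per inference
    among configurations that meet the real-time deadline."""
    best = None
    for f, c in product(FREQS_GHZ, range(1, MAX_CORES + 1)):
        if work / (f * c) <= deadline:   # real-time constraint satisfied?
            e = energy_per_inference(work, f, c)
            if best is None or e < best[0]:
                best = (e, f, c)
    return best  # (energy_joules, freq_ghz, cores), or None if infeasible
```

Under these toy models the energy-optimal frequency is the lowest one that still meets the deadline, which mirrors the race-to-idle vs. slow-and-steady trade-off an OS-level energy-aware scheduler navigates; a real implementation would hook into the kernel's frequency-scaling and core-hotplug interfaces instead of a closed-form model.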


GRP-20185 Energy-Aware Operating Systems for Edge Artificial Intelligence Inference
