Enhanced Temporal and Space Awareness for Edge AI and Multimodal Sensing
Abstract (300 words maximum)
Autonomous indoor environments require continuous awareness of occupants and conditions to maintain safety and comfort, yet many systems monitor limited signals or rely on cloud processing that adds latency and privacy risk. This project develops a real-time environmental and visual awareness module on an embedded edge-AI platform that fuses ambient sensing with on-board computer vision. The hardware stack integrates sensors for air quality, volatile compounds, illuminance, temperature, and acoustics, along with interfaces for bidirectional interaction via voice, text, visual cues, or haptics. Depth perception from structured-light, time-of-flight, or stereo sensors enables person and posture analysis. The software stack uses lightweight, hardware-optimized libraries for device I/O, messaging, and model execution. Each sensor is individually validated for stable drivers, accurate units, and synchronized sampling, feeding a unified pipeline that timestamps and buffers data on-device. A vision module performs person detection and tracks postural cues such as prone states or sudden vertical changes using object detection and temporal tracking. Ambient data provide context for adaptive responses—for example, increasing task lighting under low exposure or flagging air-quality deviations during occupancy. Processing remains entirely on the edge to minimize latency and protect privacy, with optional short diagnostic buffers for debugging. Initial results demonstrate stable sensor readouts, reliable person detection under typical indoor lighting, consistent identification of staged fall events, and coherent multimodal fusion linking environmental shifts with visual observations. Future work includes cross-modal anomaly scoring, confidence-based rechecks under occlusion or glare, local dashboard notifications, and calibrated thresholds across diverse zones. Together, these advances establish a foundation for context-aware safety and comfort in laboratories, clinics, classrooms, and smart-home environments.
*AI was used to keep the abstract within the maximum word limit and to correct its grammar.*
Use of AI Disclaimer
yes
Academic department under which the project should be listed
SPCEET – Robotics and Mechatronics Engineering
Primary Investigator (PI) Name
Razvan Voicu