Visual Sensor Fusion for Robotic Intelligence on Edge.

Disciplines

Artificial Intelligence and Robotics | Computer Sciences | Electrical and Computer Engineering

Abstract (300 words maximum)

Visual processing tasks such as detection, tracking, and localization are essential to the automation of unmanned aerial vehicles (UAVs), robots, surveillance, and defense systems. However, these intelligent tasks become challenging under high-speed motion, limited computing resources, and tight power budgets. This research explores a brain-inspired framework that processes visual information from two complementary sensors, event-based cameras and frame-based standard cameras, in a sensor-fusion style. Event cameras, also known as dynamic vision sensors (DVS), are a novel class of visual sensors that generate asynchronous events whenever the illumination of individual pixels changes in the field of view. Compared to standard cameras, the DVS offers lower latency (higher temporal resolution, 10 μs vs. 3 ms), higher dynamic range (140 dB vs. 60 dB), and lower power consumption (10 mW vs. 3 W). It is therefore well suited to capturing high-speed motion that would blur in a frame-based camera. The specific objective is to merge the advantages of both cameras to address the challenge of high-speed, energy-efficient visual processing with end-to-end closed-loop control for small robots and drones. In this work, we demonstrate our current progress in designing the fusion framework for two visual tasks, tracking and SLAM, on drones and robot cars.
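To illustrate the fusion idea described above, the sketch below shows one common way asynchronous DVS events can be accumulated into a frame-aligned map and blended with a standard camera frame. This is a minimal, hypothetical example, not the project's actual framework: the event tuple format (t, x, y, polarity), the resolutions, and the simple weighted blend are all assumptions, and a learned fusion network would typically replace the final step.

```python
# Illustrative sketch only (assumed event format and parameters, not the
# authors' implementation): accumulate asynchronous DVS events into a
# frame-aligned 2D map, then fuse it with a standard grayscale frame.
import numpy as np


def events_to_frame(events, height, width, t_start, t_end):
    """Sum signed event polarities that fall inside [t_start, t_end)
    into a 2D map with the same resolution as the camera frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += 1.0 if polarity > 0 else -1.0
    return frame


def fuse(gray_frame, event_frame, alpha=0.5):
    """Naive weighted blend of a grayscale frame and the normalized event
    map; in practice a learned fusion model would replace this step."""
    ev = event_frame / (np.abs(event_frame).max() + 1e-6)
    return alpha * gray_frame + (1.0 - alpha) * ev


# Toy usage with synthetic data (hypothetical resolution and event count)
rng = np.random.default_rng(0)
h, w = 64, 64
events = [(rng.uniform(0.0, 3e-3), rng.integers(0, w), rng.integers(0, h),
           rng.choice([-1, 1])) for _ in range(500)]
gray_frame = rng.random((h, w), dtype=np.float32)
fused = fuse(gray_frame, events_to_frame(events, h, w, 0.0, 3e-3))
print(fused.shape)  # (64, 64)
```

The 3 ms accumulation window in the toy example mirrors the frame-exposure scale mentioned in the abstract; shortening the window is what lets event data resolve motion that a single frame would blur.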

Academic department under which the project should be listed

SPCEET - Electrical and Computer Engineering

Primary Investigator (PI) Name

Yan Fang
