Coding and Analysis Subsystems of Distributed Video Sensors

PI: Prof. Shao-Yi Chien

Co-PI: Dr. Chia-han Lee

Intel Champions: Dr. V Srinivasa Somayazulu & Dr. Y. K. Chen


The M2M-network application we consider consists of clusters of video sensors/cameras that capture images (with potentially overlapping fields of view) and transmit the video data to aggregation points, which fuse and analyze the data from the different sensors. The key requirement on the sensors is to minimize energy consumption and complexity while maximizing the quality of the video gathered at the aggregation points.


Distributed video coding (DVC) has been regarded as an elegant solution to this problem, in both the multiple-sensor and the single-sensor case. In this project, we plan to extend the design space from algorithm design for point-to-point data transmission to system/hardware/algorithm co-design for content analysis and transmission in a sensor-aggregation-cloud architecture.
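The core idea behind DVC is that each sensor can compress its data without seeing the other sensors' views: the decoder at the aggregation point exploits the inter-sensor correlation as side information. As a minimal sketch (not the project's actual codec), the toy example below uses Slepian-Wolf-style syndrome coding with the 3-bit repetition code: the sensor sends a 2-bit syndrome instead of 3 raw bits, and the decoder recovers the reading using a correlated reading from a neighboring sensor.

```python
import itertools

# Parity-check matrix of the 3-bit repetition code {000, 111}.
# Its cosets partition {0,1}^3 into sets of mutual Hamming distance 3,
# so one bit of disagreement with the side information is correctable.
H = [[1, 1, 0],
     [0, 1, 1]]

def syndrome(x):
    """Compute H*x mod 2."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def encode(x):
    """Sensor side: transmit 2 syndrome bits instead of 3 raw bits."""
    return syndrome(x)

def decode(s, y):
    """Aggregation side: pick the coset member closest to side info y."""
    coset = [x for x in itertools.product((0, 1), repeat=3)
             if syndrome(x) == s]
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1)   # reading at sensor A
y = (1, 1, 1)   # correlated reading at sensor B (differs in at most 1 bit)
print(decode(encode(x), y))  # recovers (1, 0, 1)
```

The compression (2 bits per 3-bit block) holds whenever the side information differs in at most one position; real DVC systems replace the repetition code with channel codes such as LDPC or turbo codes, keeping the sensor-side encoder lightweight and shifting complexity to the decoder.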


The target of this project is to develop coding and analysis subsystems for distributed video sensors that employ distributed video coding techniques and scale with the energy-consumption and complexity budgets of the sensor and aggregation nodes. A 3x-4x improvement in power efficiency is expected across the different cases considered, including both ASIC-based and processor-based platforms.