After data is uploaded to the cloud, devices can make predictions and perform actions using their context-analysis capability, based on previously observed conditions and models. Achieving this, however, requires devices to have significant computing power for data analysis and for building context models, while data security and privacy must also be strengthened. The SIGCAM, for context analysis and management, therefore aims to develop system-level analysis and management technologies for processing the enormous amount of data generated with M2M technology, and to address several other crucial challenges across all layers, including management and security, as shown in Figure 7. The following research projects are being conducted:
With the wide adoption of M2M technology, vast networks of interconnected devices equipped with sensing capabilities will generate huge amounts of data. This project focuses on data analysis in the context of M2M systems. Efforts are under way to scale graphical machine-learning models to M2M networks. We also aim to design and implement machine-learning algorithms that learn from their environments, so that event detection and prediction remain possible even with very noisy or faulty data.
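As an illustrative sketch (not the project's actual algorithm), event detection that tolerates noisy or faulty samples can score readings against robust statistics such as the median and the median absolute deviation (MAD), which a single faulty sensor value cannot skew the way a mean and variance can:

```python
import statistics

def robust_event_scores(readings, eps=1e-9):
    """Score each reading by its deviation from the median, scaled by the
    median absolute deviation (MAD); both statistics resist outliers."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    return [abs(x - med) / (mad + eps) for x in readings]

def detect_events(readings, threshold=5.0):
    """Flag the indices whose robust score exceeds the threshold."""
    return [i for i, s in enumerate(robust_event_scores(readings)) if s > threshold]

# A faulty spike at index 4 is flagged; ordinary jitter is not.
print(detect_events([20.1, 20.3, 19.9, 20.2, 35.0, 20.0, 20.1]))  # → [4]
```

The threshold and scoring function here are placeholders; a deployed detector would be learned from the environment as described above.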
The Wu-Kong project aims to build an intelligent virtual middleware for M2M applications that can automatically recognize sensors in the field and coordinate them remotely. The middleware is designed to provide an efficient and effective way to automatically configure and transform sensor devices into the desired service components according to the user's context and a user-specified policy.
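A minimal sketch of the policy-driven binding idea follows; the `Sensor` type and the policy format (`{service_name: required_capability}`) are hypothetical placeholders, not the Wu-Kong API:

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    node_id: str
    capabilities: set = field(default_factory=set)  # e.g. {"temperature"}

def bind_services(sensors, policy):
    """Map each service named in the policy to the first discovered
    sensor that advertises the required capability."""
    binding = {}
    for service, capability in policy.items():
        for sensor in sensors:
            if capability in sensor.capabilities:
                binding[service] = sensor.node_id
                break
    return binding

nodes = [Sensor("n1", {"temperature"}), Sensor("n2", {"humidity", "light"})]
print(bind_services(nodes, {"climate": "temperature", "lighting": "light"}))
# → {'climate': 'n1', 'lighting': 'n2'}
```

A real middleware would additionally weigh user context (location, time, preferences) when several sensors match, rather than taking the first.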
This project aims to build a distributed anomaly detection system for M2M environments. A toolbox containing a set of anomaly-detection algorithms will be designed and tested against various anomalies under different M2M usage scenarios. To address the challenges of the M2M environment, we propose a distributed, hierarchical anomaly detection system consisting of a set of lightweight front-end detectors and a powerful back-end analysis center. The front-end detectors collect data for anomaly detection and compute approximation models within their computational resource constraints; this data is forwarded to the back-end analysis center, which computes more complex correlations and models. We are particularly interested in estimating the error bound between the approximation model and the exact model, which tells us when and how the approximation model on the platform needs to be updated.
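The error-bound-triggered update can be sketched as follows. This is a toy model, assuming the simplest possible "model" (a running mean) and a fixed bound; the class name, return values, and threshold are illustrative, not the project's protocol:

```python
class FrontEndDetector:
    """Lightweight front-end detector: tracks the exact running mean of a
    sensor stream, but only resynchronizes the cached approximation (the
    model last reported to the back end) when the gap between the two
    exceeds the error bound."""

    def __init__(self, error_bound=2.0):
        self.error_bound = error_bound
        self.model_mean = None    # approximation last sent to the back end
        self.count = 0
        self.running_mean = 0.0   # exact incremental mean

    def observe(self, x):
        # Incremental update of the exact mean: m += (x - m) / n.
        self.count += 1
        self.running_mean += (x - self.running_mean) / self.count
        if self.model_mean is None or \
           abs(self.running_mean - self.model_mean) > self.error_bound:
            # Approximation error exceeded the bound: push an update.
            self.model_mean = self.running_mean
            return "sync"
        return "ok"

d = FrontEndDetector(error_bound=1.0)
print([d.observe(x) for x in [10, 10, 10, 30]])
# → ['sync', 'ok', 'ok', 'sync']
```

Steady readings cost no communication; only drift past the bound triggers a sync, which is the resource trade-off the hierarchical design targets.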
Figure 7. Analysis, management, and security of the immense amounts of data generated with M2M technology.