A. Subramanya, A. Raj, J. Bilmes, and D. Fox.

Recognizing Activities and Spatial Context Using Wearable Sensors

Proc. of the Conference on Uncertainty in Artificial Intelligence (UAI), 2006

Abstract

We introduce a new dynamic model capable of recognizing both the activities an individual is performing and where that individual is located. Our approach is novel in that it uses a dynamic graphical model to jointly estimate activity and spatial context over time from asynchronous observations: GPS measurements and readings from a small mountable sensor board. Joint inference is desirable because it improves the accuracy of the model and the consistency of the location and activity estimates. The parameters of our model are trained on partially labeled data. We apply virtual evidence to improve data annotation, giving the user great flexibility when labeling training data. We present results demonstrating the performance gains achieved by virtual evidence for data annotation and by the joint inference performed by our system.
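
The full model in the paper is a richer dynamic graphical model over asynchronous GPS and sensor-board streams, but the core idea of virtual evidence can be sketched in a plain discrete HMM: each time step may carry a soft labeling potential over the hidden states that is multiplied into the forward pass, so an annotator can express partial confidence instead of committing to hard labels. The sketch below is a minimal illustration under that simplification; the function, variable names, and numbers are hypothetical and not taken from the paper.

import numpy as np

def forward_with_virtual_evidence(pi, A, obs_lik, virtual_evidence=None):
    # Forward pass of a discrete HMM in which each time step may carry
    # "virtual evidence": a soft potential over hidden states multiplied
    # into the observation likelihood. A uniform potential means the step
    # is unlabeled; a peaked potential encodes annotator confidence.
    #   pi:               (S,)   initial state distribution
    #   A:                (S, S) transition matrix, A[i, j] = P(s_t = j | s_{t-1} = i)
    #   obs_lik:          (T, S) per-step observation likelihoods P(x_t | s_t)
    #   virtual_evidence: (T, S) soft label potentials, or None for no labels
    T, S = obs_lik.shape
    if virtual_evidence is None:
        virtual_evidence = np.ones((T, S))
    alpha = np.zeros((T, S))
    alpha[0] = pi * obs_lik[0] * virtual_evidence[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * obs_lik[t] * virtual_evidence[t]
        alpha[t] /= alpha[t].sum()  # normalize each step for numerical stability
    return alpha

# Toy usage: two hidden activities, four time steps (all numbers made up).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
obs_lik = np.array([[0.7, 0.3],
                    [0.6, 0.4],
                    [0.2, 0.8],
                    [0.1, 0.9]])
ve = np.ones((4, 2))
ve[1] = [0.95, 0.05]  # annotator is fairly sure step 1 is activity 0
print(forward_with_virtual_evidence(pi, A, obs_lik, ve))

Compared with hard labels, scaling the potential at a step between uniform and near-one-hot lets the annotation reflect how certain the labeler actually is.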


Download

Full paper [pdf] (931 kb, 9 pages)


 


