The daily activity patterns of individuals are associated with their mental and physical health. Thus, identifying changes in these patterns may reveal the progression of a disease or indicate a patient's response to a specific treatment. Furthermore, being aware of what a person is currently doing enables smart assistants to provide context-aware feedback and support in daily life. By means of smartphones and other wearable sensing devices, the Wearable Computing Laboratory studies various methods for recognizing a person's activities without the need for environment-installed infrastructure.
Some of the current research topics and the corresponding contacts are listed below.
Activity-based Simultaneous Localization And Mapping (ActionSLAM) is a system for fully standalone, day-long indoor position tracking from body-worn motion sensors. To account for the accumulating error of open-loop tracking, ActionSLAM uses the recognition of location-specific activities to reset the position estimate whenever a person revisits a known location.
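The loop-closure idea can be sketched with a minimal particle filter: particle positions drift during open-loop dead reckoning, and recognizing a location-specific activity at a stored landmark reweights the particles back toward that place. The numbers and the `Particle`/`predict`/`correct` helpers below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

random.seed(0)

class Particle:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.weight = 1.0

def predict(particles, step_dx, step_dy, noise=0.05):
    """Dead-reckoning update: move each particle by the odometry step plus noise."""
    for p in particles:
        p.x += step_dx + random.gauss(0, noise)
        p.y += step_dy + random.gauss(0, noise)

def correct(particles, landmark, sigma=0.3):
    """Loop closure: when a location-specific activity is recognized,
    reweight particles by their distance to the stored landmark position."""
    for p in particles:
        d = math.hypot(p.x - landmark[0], p.y - landmark[1])
        p.weight *= math.exp(-0.5 * (d / sigma) ** 2)
    total = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= total

def estimate(particles):
    """Weighted mean of the particle cloud."""
    return (sum(p.weight * p.x for p in particles),
            sum(p.weight * p.y for p in particles))

particles = [Particle(0.0, 0.0) for _ in range(200)]
for _ in range(10):                      # walk away from the start...
    predict(particles, 0.5, 0.0)
for _ in range(10):                      # ...and back toward it
    predict(particles, -0.5, 0.0)
correct(particles, landmark=(0.0, 0.0)) # activity recognized at a known place
x, y = estimate(particles)
```

After the correction step, the estimate snaps back to the revisited landmark even though dead reckoning alone would have drifted.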
The figure (see 3D ActionSLAM: wearable person tracking in multi-floor environments, Michael Hardegger, Daniel Roggen and Gerhard Tröster (2014), in: Personal and Ubiquitous Computing) depicts the ActionSLAM paths of four participants wearing motion sensors on their feet, and the narrative chart derived from those paths.
Contact: Michael Hardegger
Topic models originate from text mining and are used to discover hidden themes from word statistics in a corpus of documents. We investigate the application of topic models to activity routine discovery from sensor data. We focus on the development of discovery methods and evaluate our approach in a simulated environment as well as in real-world studies. In particular, we monitored hemiparetic rehabilitation patients in a day care rehabilitation center using wearable motion sensors. Patient daily routine patterns such as lunch, socialising, rest and cognitive training were successfully inferred from sensor data using topic models.
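In the text-mining analogy, recognized activity primitives play the role of words, time windows the role of documents, and daily routines the role of hidden topics. A minimal sketch of the first step, turning a primitive stream into bag-of-words documents that a standard LDA implementation could then consume, might look as follows; the primitive labels and the window length are illustrative assumptions, not data from the study.

```python
from collections import Counter

# Toy stream of recognized activity primitives: (minute, primitive label).
# Primitives are the "words"; routines (lunch, rest, ...) would be the
# hidden "topics" a topic model infers from their co-occurrence statistics.
stream = ([(t, "walk") for t in range(0, 10)] +
          [(t, "sit") for t in range(10, 40)] +
          [(t, "eat") for t in range(40, 70)] +
          [(t, "sit") for t in range(70, 90)])

def to_documents(stream, window=30):
    """Group the primitive stream into fixed-length time windows and count
    primitive occurrences, yielding one bag-of-words document per window."""
    docs = {}
    for t, word in stream:
        docs.setdefault(t // window, Counter())[word] += 1
    return [docs[k] for k in sorted(docs)]

documents = to_documents(stream)
```

Each resulting document is a primitive-count vector; a standard topic-model library could then be fit on these counts to recover routine "topics".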
Contact: Julia Seiter
In gesture recognition systems, training data are carefully annotated by experts, who specify the start and end times of each gesture and the corresponding labels. This traditional annotation by a small number of experts is accurate but very time-consuming and does not scale. We investigate a new annotation technique based on crowdsourcing, in which the annotation tasks are distributed to a crowd of ordinary people -- low-commitment labelers. Crowdsourced annotations suffer from annotation noise, such as mislabeling or inaccurate identification of the start and end times of gesture instances. To overcome this noise, we propose new recognition methods (SegmentedLCSS and WarpingLCSS) and show their robustness to crowdsourced annotation noise.
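Both proposed methods build on the longest common subsequence (LCSS) measure, which tolerates the boundary jitter and spurious samples typical of crowdsourced labels. The sketch below shows a plain one-dimensional LCSS with a matching tolerance `eps`; the sample values, the choice of `eps`, and the normalization are illustrative assumptions rather than the exact SegmentedLCSS/WarpingLCSS formulation.

```python
def lcss(template, stream, eps=0.5):
    """Length of the longest common subsequence of two numeric sequences,
    where two samples match if they differ by at most eps. Unlike exact
    matching, LCSS skips over spurious samples, which makes template
    matching tolerant to noisy gesture boundaries."""
    n, m = len(template), len(stream)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(template[i - 1] - stream[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def similarity(template, stream, eps=0.5):
    """Normalized similarity in [0, 1]; a gesture would be spotted when the
    similarity to its template exceeds a (training-derived) threshold."""
    return lcss(template, stream, eps) / min(len(template), len(stream))

# A spurious sample (9) in the stream does not break the match.
score = similarity([1, 2, 3, 4], [1.1, 9, 2.2, 3.1, 4.0])
```

Here the stream still reaches full similarity with the template despite the inserted outlier, which is the property that makes LCSS-style matching robust to noisy annotations.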
This figure depicts a taxonomy of crowdsourced annotation noise. (For more information, see Robust Online Gesture Recognition with Crowdsourced Annotations, Long-Van Nguyen-Dinh, Alberto Calatroni and Gerhard Tröster, in: Journal of Machine Learning Research (JMLR), 2014.)
Contact: Long-Van Nguyen Dinh
The bottom part of this figure visualizes crowd-sourced audio data collected from Freesound alongside one user's mobile-phone audio data for the car context class. As the figure shows, the crowd-sourced data represents some parts of the user data well.
For more information, see Towards Scalable Activity Recognition: Adapting Zero-Effort Crowdsourced Acoustic Models, Long-Van Nguyen-Dinh, Ulf Blanke and Gerhard Tröster, in: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013).
Contact: Long-Van Nguyen Dinh
Recent off-the-shelf phones integrate ambient sensors for temperature, humidity, and pressure, alongside the long-established light sensor. Currently, these sensors are rarely used for anything other than displaying raw information about the phone's environment or barometer-based weather forecasting.
Semantic Context Recognition. Motivated by observations such as those in the following picture, where the phone's ambient sensors (e.g., temperature and humidity) capture both changes in the user's context -- such as moving from the office to outdoors or into a car -- and each context's ambient fingerprint, we study whether these extremely low-power phone sensors are useful for recognizing a user's semantic context.
Indoor Location Awareness. Further, we aim to determine whether the phone's ambient sensors can be used for semantic indoor localization, under the hypothesis that different rooms in a residence tend to have different ambient properties, as the figure also shows.
An example of how the readings of the phone's ambient sensors (temperature, humidity, pressure and light) change across a user's semantic contexts and across different rooms in a home. (See Low-Power Ambient Sensing in Smartphones for Continuous Semantic Localization, by S. Mazilu, U. Blanke, A. Calatroni, and G. Troester. In AmI, 2013.)
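Under that hypothesis, a minimal room classifier could match a live ambient reading against stored per-room fingerprints. The fingerprint values and the nearest-fingerprint rule below are illustrative assumptions for the sketch, not the method or measurements from the paper.

```python
import math

# Hypothetical per-room ambient fingerprints:
# (temperature in degrees C, relative humidity in %, pressure in hPa).
# Illustrative values only, not measured data.
fingerprints = {
    "kitchen":  (24.0, 55.0, 966.0),
    "bathroom": (23.0, 70.0, 966.2),
    "bedroom":  (20.5, 45.0, 966.1),
}

def classify(sample, fingerprints):
    """Assign the sample to the room with the nearest fingerprint
    (Euclidean distance on raw readings; a real system would first
    normalize each sensor channel so no single unit dominates)."""
    return min(fingerprints,
               key=lambda room: math.dist(sample, fingerprints[room]))

room = classify((23.2, 68.0, 966.1), fingerprints)
```

A warm, humid reading lands closest to the bathroom fingerprint, illustrating how coarse ambient properties alone can already separate rooms.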
Contact: Sinziana Mazilu