Dataset collection in Ängen

On February 21st, 2018, MPI members organized a workshop within the E-care@Home project to collect datasets revolving around resting activities. These E-care@Home datasets contain elements such as lying down, getting up, sleeping, turning in bed, and sitting. A total of 160 datasets (between 10 and 30 seconds each) were collected; they contain data from pressure sensors, motion sensors, and Shimmer sensors.

Part of the E-care@Home project aims at addressing the lack of usable data. Many current context-recognition systems need annotated data to train and test their algorithms. However, very few annotated datasets exist, and E-care@Home members are working to make new datasets available to researchers around the world.

The Shimmer sensor, a wearable used during the data collection



Activities such as sitting on the bed or lying down were recorded.








The news on the E-care@Home website:



Tutorial at AAAI 2018 (New Orleans, USA) – Feb 2-7 2018

Cognitive Vision: On Deep Semantics in Visuo-Spatial Computing

Tutorial Presenters:

Mehul Bhatt (Örebro University, and University of Bremen), and
Jakob Suchan (University of Bremen)


This tutorial at AAAI 2018 presents cognitive vision from the perspectives of language, logic, and artificial intelligence. The tutorial focuses on application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., commonsense scene understanding; vision for robotics and HRI; narrative interpretation from the viewpoints of visuo-auditory perception and digital media; and multimodal sensemaking of data.

We particularly highlight Deep (Visuo-Spatial) Semantics, denoting the existence of systematic formalisation and declarative programming methods (e.g., pertaining to space and motion) supporting query answering, relational learning, non-monotonic abductive inference, and embodied simulation. Here, we demonstrate the integration of methods from knowledge representation and computer vision, with a focus on reasoning and learning about space, action, and change. Against the backdrop of areas as diverse as architecture design, cognitive film studies, cognitive robotics, and eye-tracking, this tutorial covers both applications and basic methods concerned with topics such as:

– explainable visual perception,
– semantic video understanding,
– language generation from video,
– declarative spatial reasoning, and
– computational models of narrative.


No special background is needed; participants need only a general interest in AI, Cognitive Science, or HCI. We especially encourage early doctoral researchers and educators who want to learn about general tools for logic-based reasoning about visual imagery.

Tutorial URL:

MPI and Pepper at “Live at Heart”

MPI and the robot Pepper were at the festival “Live at Heart”, Örebro’s biggest music festival. More information can be found (in Swedish) on the Örebro University website:

MPI at IJCAI 2017

Michael Sioutis presented the article “Towards a Synergy of Qualitative Spatio-Temporal Reasoning and Smart Environments for Assisting the Elderly at Home” (pdf) at the Qualitative Reasoning Workshop (QR2017) held at IJCAI 2017. He and Mehul Bhatt also attended the IJCAI conference.