MPI at AI4X-Health

On March 13th, Jennifer Renoux presented at the AI4X – Health conference. AI4X is a series of one-day conferences on the theme of Artificial Intelligence (AI) and its development and application in different sectors in Sweden. The conferences are organized by WASP AI, funded by the Knut and Alice Wallenberg Foundation. Jennifer gave a rapid overview of the health-related projects currently carried out within AASS, including Ambient Assisted Living projects (such as E-care@Home and Movecare) and projects related to health in industrial environments (such as RAISE).

Dataset collection in Ängen

On February 21st, 2018, MPI members organized a workshop within the E-care@Home project to collect datasets revolving around resting activities. These E-care@Home datasets contain elements such as lying down, getting up, sleeping, turning in bed, or sitting. A total of 160 datasets (between 10 and 30 s each) were collected, containing data from pressure sensors, motion sensors, and Shimmer sensors.

Part of the E-care@Home project aims at addressing the lack of usable data. Many current context-recognition systems need annotated data to train and test their algorithms, yet very few annotated datasets exist; E-care@Home members are working to make new datasets available to researchers around the world.
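As a rough illustration of what such an annotated dataset entry might look like, here is a minimal sketch in Python. All field names and labels are hypothetical, chosen only to reflect the activities and sensor types mentioned above, not the actual E-care@Home data schema.

```python
from dataclasses import dataclass, field

# Hypothetical representation of one annotated activity segment.
# Field names are illustrative, not the real E-care@Home format.
@dataclass
class AnnotatedSegment:
    label: str                                     # e.g. "lying_down", "getting_up", "sitting"
    duration_s: float                              # collected segments were 10-30 s long
    pressure: list = field(default_factory=list)   # pressure-sensor readings
    motion: list = field(default_factory=list)     # motion-sensor events
    wearable: list = field(default_factory=list)   # Shimmer (wearable) samples

segment = AnnotatedSegment(label="sitting", duration_s=12.5,
                           pressure=[0.1, 0.4, 0.8])
print(segment.label, segment.duration_s)
```

A collection of such labeled segments is exactly what supervised context-recognition algorithms need for training and testing.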

The Shimmer sensor, a wearable used during the data collection

Activities such as sitting on the bed or lying down were recorded.
The news on the E-care@Home website:



Our Research featured in MIT Technology Review

Our latest work has been featured in MIT Technology Review!

“DNA Molecular Storage System: Transferring Digitally Encoded Information through Bacterial Nanonetworks” is the result of a collaboration between Örebro University, University of Padua, Waterford Institute of Technology, and Tampere University of Technology.

Find the full article on this page:

Tutorial at AAAI 2018 (New Orleans, USA) – Feb 2-7 2018

Cognitive Vision: On Deep Semantics in Visuo-Spatial Computing

Tutorial Presenters:

Mehul Bhatt (Örebro University, and University of Bremen), and
Jakob Suchan (University of Bremen)


This tutorial at AAAI 2018 presents cognitive vision from the perspectives of language, logic, and artificial intelligence. The tutorial focuses on application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., commonsense scene understanding; vision for robotics and HRI; narrative interpretation from the viewpoints of visuo-auditory perception and digital media; and multimodal sensemaking of data.

We particularly highlight Deep (Visuo-Spatial) Semantics, denoting the existence of systematic formalisation and declarative programming methods (e.g., pertaining to space and motion) supporting query answering, relational learning, non-monotonic abductive inference, and embodied simulation. Here, we demonstrate the integration of methods from knowledge representation and computer vision, with a focus on reasoning and learning about space, action, and change. Against the backdrop of areas as diverse as architecture design, cognitive film studies, cognitive robotics, and eye-tracking, this tutorial covers both applications and basic methods concerned with topics such as:

– explainable visual perception,
– semantic video understanding,
– language generation from video,
– declarative spatial reasoning, and
– computational models of narrative.
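To give a flavour of the qualitative abstractions that declarative spatial reasoning builds on, here is a small sketch (not taken from the tutorial itself) that computes a coarse, RCC-style topological relation between two axis-aligned rectangles, e.g., object bounding boxes detected in a video frame. The relation names and the rectangle encoding are illustrative choices.

```python
# Illustrative sketch: a coarse topological relation between two
# axis-aligned rectangles given as (x_min, y_min, x_max, y_max).
# This collapses the finer RCC-8 distinctions (e.g. external
# connection vs. partial overlap) into four coarse categories.

def topo_relation(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # No shared point at all: the regions are disconnected.
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "disconnected"
    # a lies entirely within b.
    if ax1 >= bx1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2:
        return "inside"
    # b lies entirely within a.
    if bx1 >= ax1 and by1 >= ay1 and bx2 <= ax2 and by2 <= ay2:
        return "contains"
    return "overlapping"

print(topo_relation((0, 0, 2, 2), (3, 3, 5, 5)))  # disconnected
print(topo_relation((1, 1, 2, 2), (0, 0, 5, 5)))  # inside
```

Symbolic relations like these, rather than raw pixel coordinates, are what a logic-based system can query, learn over, and use for abductive explanation of scene dynamics.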


No special background is needed; participants need only a general interest in AI, Cognitive Science, or HCI. We especially encourage early doctoral researchers and educators wanting to learn about general tools for logic-based reasoning about visual imagery.

Tutorial URL: