On February 21st, 2018, MPI members organized a workshop within the E-care@Home project to collect datasets revolving around resting activities. These E-care@Home datasets contain activities such as lying down, getting up, sleeping, turning in bed, and sitting. A total of 160 datasets (between 10 and 30 seconds each) were collected, containing data from pressure sensors, motion sensors, and Shimmer sensors.
Part of the E-care@Home project aims at addressing the lack of usable data. Many current context-recognition systems need annotated data to train and test their algorithms, yet very few annotated datasets exist. E-care@Home members are therefore working to make new datasets available to researchers around the world.
Cognitive Vision: On Deep Semantics in Visuo-Spatial Computing
Mehul Bhatt (Örebro University, and University of Bremen), and
Jakob Suchan (University of Bremen)
This tutorial at AAAI 2018 presents cognitive vision from the perspectives of language, logic, and artificial intelligence. The tutorial focuses on application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., commonsense scene understanding; vision for robotics and HRI; narrative interpretation from the viewpoints of visuo-auditory perception and digital media; and multimodal sensemaking of data.
We particularly highlight Deep (Visuo-Spatial) Semantics, denoting the existence of systematic formalisation and declarative programming methods (e.g., pertaining to space and motion) supporting query answering, relational learning, non-monotonic abductive inference, and embodied simulation. In particular, we demonstrate the integration of methods from knowledge representation and computer vision, with a focus on reasoning and learning about space, action, and change. Against the backdrop of areas as diverse as architecture design, cognitive film studies, cognitive robotics, and eye-tracking, this tutorial covers both applications and basic methods concerned with topics such as:
– explainable visual perception,
– semantic video understanding,
– language generation from video,
– declarative spatial reasoning, and
– computational models of narrative.
No special background is needed; participants need only a general interest in AI, Cognitive Science, or HCI. We especially encourage early doctoral researchers and educators wanting to learn about general tools for logic-based reasoning about visual imagery.
Pepper participated in the Senior Festival on Saturday, 7th October, at the Conventum Arena in Örebro. The robot met and interacted with elderly people, and we asked them for their ideas and impressions about robots in eldercare.
Information about the event can be found on the following page:
Neziha Akalin participated in the 3rd Summer School on Social Human-Robot Interaction (https://hrisummerschool.org/), held in Vila Nova de Milfontes, Portugal, between the 4th and the 8th of September. The school offered its students lectures and workshops covering a wide range of topics in social HRI.
The journal paper titled “An Ontology-based Context-aware System for Smart Homes: E-care@home” has been accepted for publication in the journal Sensors, special issue “Context Aware Environments and Applications”.
The full article can be found at: http://www.mdpi.com/1424-8220/17/7/1586
Michael Sioutis presented the article “Towards a Synergy of Qualitative Spatio-Temporal Reasoning and Smart Environments for Assisting the Elderly at Home” (pdf) during the Qualitative Reasoning Workshop (QR2017) held at IJCAI 2017. He and Mehul Bhatt also attended the IJCAI conference.