CoDesign Lab Announcements

CoDesign  —  Cognition.  Artificial Intelligence.  Interaction.  Design.

http://www.codesign-lab.org

Rome.  Stockholm.  Warsaw.  New Orleans.  /   Feb – Sep 2018
Symposia. Workshop. Tutorials. 

##  SYMPOSIA  ##

Minds. Movement. Moving Image.  (Part 1):  On Spatial Cognition and the Built Environment
Minds. Movement. Moving Image.  (Part 2):  On Visuo-Auditory Perception and the Moving Image

Symposia convener:  Mehul Bhatt
International Conference on Spatial Cognition (ICSC)
ICSC 2018  /  Rome, Italy   /   September

TALKS BY:

Mehul Bhatt.  Ann Sloan Devlin.  Kay O’Halloran.  Paul Hemeren.
Lissa Holloway-Attaway.  Vasiliki Kondyli.  Juhyun Lee.  Daniel Levin.
Juval Portugali.  Jakob Suchan.  Barbara Tversky.  /

INVITED PANELISTS FROM:

Australia (Curtin, U. South Australia).  Germany (Bremen).
Israel (Tel Aviv).  Sweden (Skövde, Örebro).
United States (Columbia, Connecticut, Stanford, Vanderbilt).  /

Brochure   www.codesign-lab.org/www/ICSC2018.pdf

##  WORKSHOP  ##

Cognitive Vision
— Integrated Vision and AI for Embodied Perception and Interaction
Chairs:  Mehul Bhatt,  Alessandra Russo,  Parisa Kordjamshidi

The 27th International Joint Conference on Artificial Intelligence and
the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI)
IJCAI-ECAI 2018  /  Stockholm, Sweden   /   July
www.codesign-lab.org/faim18-cogvis/

##  TUTORIALS  ##

Declarative Spatial Reasoning
— Theory  |  Methods  |  Applications
Presenters: Mehul Bhatt, Carl Schultz

The 27th International Joint Conference on Artificial Intelligence and
the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI)
IJCAI-ECAI 2018  /  Stockholm, Sweden   /   July
Brochure    http://www.codesign-lab.eu/www/IJCAI-ECAI2018.pdf

Spatial Cognition in the Wild
— Methods for Large-Scale Behavioural Research in Visuo-Locomotive Perception
Presenters: Mehul Bhatt, Jakob Suchan

The 10th ACM Symposium on Eye Tracking Research & Applications (ETRA)
ETRA 2018  /  Warsaw, Poland   /   June
http://etra.acm.org/2018/tutorials.html#tutorial-1
Brochure   http://www.codesign-lab.eu/www/ETRA2018.pdf

Cognitive Vision
— On Deep Semantics in Visuo-Spatial Computing
Presenters: Mehul Bhatt, Jakob Suchan

The 32nd AAAI Conference on Artificial Intelligence (AAAI)
AAAI 2018  /  New Orleans, United States   /   February
https://aaai.org/Conferences/AAAI-18/aaai18tutorials/#sa1

Movecare demo accepted at AAMAS

A demo in collaboration with the University of Milan, part of the Movecare project, has been accepted for presentation at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS). The demo presents the integration of a Community-Based Activity Center (CBAC) and a Virtual Caregiver in order to suggest games for an elderly user to play, as well as possible opponents to play against. The recommendations given by the Virtual Caregiver are based on the user's cognitive profile, preferences, and previous activities, as well as context information such as temperature, humidity, and time of day.
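
The recommendation logic itself is not detailed in this announcement. Purely as an illustration, here is a minimal Python sketch of how such a suggestion could combine a user profile with context information; all field names, weights, and the game catalogue below are hypothetical and are not the Movecare implementation:

    from dataclasses import dataclass, field
    from typing import Dict, List, Set

    @dataclass
    class UserProfile:
        # Hypothetical cognitive-profile fields, not the Movecare data model.
        preferred_games: Set[str] = field(default_factory=set)
        recently_played: List[str] = field(default_factory=list)
        cognitive_level: int = 2              # 1 (easy) .. 5 (demanding)

    @dataclass
    class Context:
        temperature_c: float
        humidity_pct: float
        hour_of_day: int

    def suggest_game(profile: UserProfile, context: Context,
                     catalogue: Dict[str, int]) -> str:
        """Pick one game from `catalogue` (name -> difficulty, 1..5).

        Favours games matching the user's cognitive level and stated
        preferences, penalises games played recently, and avoids the
        most demanding games late in the evening.
        """
        def score(name: str, difficulty: int) -> float:
            s = -abs(difficulty - profile.cognitive_level)    # difficulty match
            if name in profile.preferred_games:
                s += 2.0                                      # stated preference
            if name in profile.recently_played[-3:]:
                s -= 1.5                                      # encourage variety
            if context.hour_of_day >= 20 and difficulty >= 4:
                s -= 1.0                                      # lighter games at night
            return s

        return max(catalogue, key=lambda name: score(name, catalogue[name]))

    # Illustrative use only:
    profile = UserProfile(preferred_games={"memory_cards"}, recently_played=["quiz"])
    context = Context(temperature_c=21.0, humidity_pct=40.0, hour_of_day=21)
    print(suggest_game(profile, context, {"memory_cards": 3, "quiz": 2, "chess": 5}))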

Some videos showing the CBAC in action can be found on the Movecare YouTube channel.


MPI at AI4X-Health

On the 13th of March, Jennifer Renoux presented at the AI4X-Health conference. AI4X is a series of one-day conferences on the theme of Artificial Intelligence (AI), its development, and its application in different sectors in Sweden. The conferences are organized by WASP AI, funded by the Knut and Alice Wallenberg Foundation. Jennifer gave a rapid overview of the health-related projects currently carried out within AASS, including Ambient Assisted Living projects (such as E-care@Home and Movecare) and projects related to health in industrial environments (such as RAISE).

Dataset collection in Ängen

On February 21st, 2018, MPI members organized a workshop within the E-care@Home project to collect datasets revolving around resting activities. These E-care@Home datasets contain activities such as lying down, getting up, sleeping, turning in bed, and sitting. A total of 160 datasets (between 10 and 30 s each) were collected, containing data from pressure sensors, motion sensors, and Shimmer sensors.

Part of the E-care@Home project aims at addressing the lack of usable data. Many current context-recognition systems need annotated data in order to train and test their algorithms. However, very few annotated datasets exist, and E-care@Home members are working to make new datasets available to researchers around the world.
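
As a purely hypothetical illustration of what such annotated data can look like, here is a short Python sketch of representing one labelled segment and filtering a collection by activity label; the field names and values below are assumptions, not the actual E-care@Home format:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Segment:
        """One annotated recording of roughly 10-30 s.

        `label` is the resting activity (e.g. "lying_down", "getting_up",
        "sleeping", "turning_in_bed", "sitting"); `streams` maps a sensor
        name ("pressure", "motion", "shimmer_accel", ...) to its samples.
        """
        label: str
        duration_s: float
        streams: Dict[str, List[float]]

    def by_label(segments: List[Segment], label: str) -> List[Segment]:
        """Return all segments annotated with the given activity label."""
        return [s for s in segments if s.label == label]

    # Two toy segments with made-up values, purely for illustration:
    segments = [
        Segment("sitting", 12.0, {"pressure": [0.1, 0.4, 0.5], "motion": [0, 1, 0]}),
        Segment("lying_down", 25.0, {"pressure": [0.8, 0.9, 0.9], "motion": [1, 0, 0]}),
    ]
    print(len(by_label(segments, "lying_down")))   # -> 1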

The Shimmer sensor, a wearable used during the data collection.


Activities such as sitting on the bed or lying down were recorded.


The news item on the E-care@Home website:

http://ecareathome.se/dataset-collection-at-angen/


Tutorial at AAAI 2018 (New Orleans, USA) – Feb 2–7, 2018

Cognitive Vision: On Deep Semantics in Visuo-Spatial Computing

Tutorial Presenters:

Mehul Bhatt (Örebro University and University of Bremen), and
Jakob Suchan (University of Bremen)

Description

This tutorial at AAAI 2018 presents cognitive vision from the perspectives of language, logic, and artificial intelligence. The tutorial focuses on application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., commonsense scene understanding; vision for robotics and HRI; narrative interpretation from the viewpoints of visuo-auditory perception and digital media; and multimodal sensemaking of data.

We particularly highlight Deep (Visuo-Spatial) Semantics, denoting the existence of systematic formalisation and declarative programming methods (e.g., pertaining to space and motion) supporting query answering, relational learning, non-monotonic abductive inference, and embodied simulation. Here, we demonstrate the integration of methods from knowledge representation and computer vision, with a focus on reasoning and learning about space, action, and change. Against the backdrop of areas as diverse as architecture design, cognitive film studies, cognitive robotics, and eye tracking, this tutorial covers both applications and basic methods concerned with topics such as the following (a short illustrative sketch follows the list):

– explainable visual perception,
– semantic video understanding,
– language generation from video,
– declarative spatial reasoning, and
– computational models of narrative.
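
To give a flavour of the declarative style of spatial reasoning mentioned above, here is a minimal, purely illustrative Python sketch of querying simple qualitative (topological) relations between axis-aligned regions. It is not the presenters' framework, and the relation names and checks are deliberate simplifications:

    from dataclasses import dataclass

    @dataclass
    class Box:
        """Axis-aligned rectangle standing in for a detected object region."""
        x1: float
        y1: float
        x2: float
        y2: float

    def disconnected(a: Box, b: Box) -> bool:
        """True if the two regions neither touch nor overlap."""
        return a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1

    def proper_part(a: Box, b: Box) -> bool:
        """True if region a lies strictly inside region b."""
        return a.x1 > b.x1 and a.y1 > b.y1 and a.x2 < b.x2 and a.y2 < b.y2

    def partially_overlapping(a: Box, b: Box) -> bool:
        """True if the regions are in contact but neither lies inside the other."""
        return (not disconnected(a, b)
                and not proper_part(a, b) and not proper_part(b, a))

    # Toy "scene query": is the person region inside the doorway region?
    person, doorway = Box(2, 0.5, 3, 2), Box(1, 0, 4, 3)
    print(proper_part(person, doorway))   # -> True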

Prerequisites:

No special background is needed; participants need only a general interest in AI, Cognitive Science, or HCI. We especially encourage early doctoral researchers and educators wanting to learn about general tools for logic-based reasoning about visual imagery.

Tutorial URL:

http://cognitive-vision.org/events