The goal of the lab is to develop methods and techniques for extracting meaningful information from sensor data, where such data emerges from complex physical systems such as robots and sensor networks. This information is extracted to facilitate good interaction between humans and systems, as well as between systems themselves. Our investigations aim to provide semantically rich representations of data using AI techniques from machine learning and automated reasoning. We further study the impact of our methods by evaluating the quality of the interaction they enable with humans and other agents. Our research spans a number of topics and fields in which we collaborate with industry and other academic partners, including social robotics, smart home environments, multi-agent systems, and more.
Our research focuses on three different themes:
- Representation Learning
- Semantic Perception
- Human-Systems Interaction
This research theme aims to learn good feature representations from raw sensory data. Developing suitable representations is the key to achieving a learning system that both generalizes well to new data and, ideally, can describe complicated data to the user through human-interpretable representations. The process of learning representations can be knowledge-driven or data-driven, can use unlabeled or labeled data, and can proceed with or without human interaction. Representations can sit at different levels of abstraction to help bridge the gap between low-level sensory data and high-level abstract concepts.
- deep learning
- neural network models
- probabilistic graphical models
- interactive learning
- interpretable representation
- multi-level abstraction
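As a minimal illustration of data-driven representation learning from unlabeled data, the sketch below learns prototype vectors with a naive k-means clustering and then encodes raw points by their nearest prototype, yielding a discrete, human-interpretable representation of the raw input. The initialization, distance measure, and data are illustrative assumptions for the example, not any specific system of the lab.

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Naive k-means: learns k prototype vectors (a simple data-driven
    representation) from a list of unlabeled points.

    Initialization with the first k points is a toy choice; real
    systems would use a randomized or smarter initialization."""
    centers = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest prototype.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[i].append(p)
        # Move each prototype to the mean of its assigned points.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centers

def encode(point, centers):
    """Represent a raw point by the index of its nearest prototype."""
    return min(range(len(centers)), key=lambda i: dist2(point, centers[i]))
```

For example, points drawn from two well-separated groups end up encoded by two different prototype indices, turning continuous sensor readings into a small symbolic vocabulary.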
Semantic perception, one of our central research themes, refers to any perceptual process that is semantically enabled for a given purpose. Semantic perception has increasingly been considered a salient feature, and at the same time the main challenge, in the development of intelligent interactive systems. Our research in this theme concerns methods and approaches that aim to bridge the gap between perceptual data and meaningful semantic knowledge. More specifically, it ranges over machine sensing modalities, techniques for extracting and learning perceptual representations, and semantic grounding (or anchoring), in order to derive representative semantic knowledge about the perceived world (from the viewpoint of a machine). Another aspect of our interest in semantic perception is the autonomous acquisition of (context-related) knowledge used to enrich the representation of a given set of data. To automate the knowledge acquisition process, our approach to semantic representation relies on ontological techniques. However, because of the inevitable and non-trivial trade-off between the expressivity of representation models and the complexity of reasoning over the semantics, our research also concerns the development of reasoning techniques applicable to the represented semantics of the perceived data.
- semantic knowledge
- ontological techniques
- knowledge representation and reasoning
- perceptual anchoring
- machine sensing modalities
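The core of perceptual anchoring can be sketched as a process that maintains links between symbols and the percepts that match them over time. The class below is a toy illustration only: the flat feature vectors, the Euclidean matching threshold, and the `obj-N` symbol scheme are assumptions made for this example, not the lab's actual anchoring framework.

```python
import itertools

class AnchorStore:
    """Toy sketch of perceptual anchoring: each anchor links a symbol
    (e.g. 'obj-0') to the most recent percept that matched it."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold        # max distance to count as a match
        self.anchors = {}                 # symbol -> latest feature vector
        self._ids = itertools.count()     # source of fresh symbol ids

    @staticmethod
    def _dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def observe(self, percept):
        """Match the percept against existing anchors; on a match,
        track (update the anchor), otherwise acquire a new symbol."""
        for symbol, feats in self.anchors.items():
            if self._dist(feats, percept) <= self.threshold:
                self.anchors[symbol] = percept   # track: refresh the link
                return symbol
        symbol = f"obj-{next(self._ids)}"        # acquire: new anchor
        self.anchors[symbol] = percept
        return symbol
```

Feeding the store a stream of percepts, nearby observations reuse the same symbol while distant ones mint a new one, which is the basic bookkeeping that lets symbolic reasoning refer to perceived objects.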
The work in this research direction contributes to the state of the art in areas such as social robotics, human-machine interaction, and multi-agent systems. High-level semantic representations enable us to create interaction and collaboration between humans and robots, virtual characters, and software and simulated agents. Our focus lies not just on creating effective and intuitive interactive setups, but also on evaluating them and predicting the effects of the interaction. The tools we apply range from empirically grounded usability analysis to multi-actor systems and agent-based simulation.
In the area of interaction analysis, we apply qualitative and quantitative tools of varying complexity that study the relevant aspects of the particular application domain. Examples from the domain of mobile robotic telepresence systems include proxemics (spatial relationships), sociometry (conversation characteristics), presence, usability, and non-verbal cues (e.g. facial expressions and gaze). Examples from the domain of sensor networks for home environments include usage statistics and behavioral changes.
Interaction plays a central role in multi-agent systems: when and how actors interact determines the actual outcome of the overall system. Using agent-based simulation, we analyze complex socio-technical systems (e.g. in logistics) and predict the future development of complex systems comprising diverse actors. Our research here relates to engineering complex models with theory-driven and data-driven approaches. The current focus of research is on frameworks for capturing complex interaction, for human immersion into simulated worlds, and for sensor-based alignment of a running simulation to real-world dynamics.
- interaction quality
- development of evaluation methods
- multi-actor systems
- agent-based simulation
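A toy agent-based simulation can illustrate how the interaction rule shapes the system-level outcome. The sketch below implements a DeGroot-style consensus model in which each agent repeatedly moves its opinion toward the average of the others; the update rule, the fully connected interaction pattern, and the parameters are illustrative assumptions for the example, not one of the lab's models.

```python
def step(opinions, rate=0.5):
    """One simulation round: every agent shifts its opinion by `rate`
    toward the average opinion of all other agents (fully connected
    interaction; a DeGroot-style consensus update)."""
    n = len(opinions)
    total = sum(opinions)
    new = []
    for x in opinions:
        others_avg = (total - x) / (n - 1)   # mean opinion of the others
        new.append(x + rate * (others_avg - x))
    return new

def simulate(opinions, rounds=50):
    """Run the interaction rule for a number of rounds and return the
    final opinion profile."""
    for _ in range(rounds):
        opinions = step(opinions)
    return opinions
```

With this fully connected rule the agents converge to a shared opinion (the mean is preserved at every step, while deviations from it contract each round); restricting who interacts with whom would change that outcome, which is exactly the kind of interaction-dependent system behavior such simulations are used to study.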