Workshop on “Intentions in Human-Agent Interaction”
17 October 2017 (full-day workshop)
in conjunction with the
5th International Conference on
Human-Agent Interaction (HAI 2017)
CITEC, Bielefeld, Germany
We encourage submissions addressing any aspect of the role of intentions in human-agent interaction, such as the recognition, attribution, or communication of intentions. In particular, we would like to see contributions addressing differences in interaction with different types of artificial agents, including – but not limited to – social robots, virtual agents, and automated vehicles. We further encourage authors to be clear about the theoretical framework(s) and inherent assumptions underlying technological implementations – and their ramifications for evaluating the quality of human-agent interactions.
Background & Motivation
Research in the cognitive sciences, not least social neuroscience, has in the last 10-20 years made substantial progress in elucidating the mechanisms underlying the recognition of actions and intentions in natural human-human social interactions and in developing computational models of these mechanisms. However, there is much less research on the mechanisms underlying the human interpretation of the behaviour of the large variety of artificial agents we increasingly encounter in daily life, from virtual agents to social robots and automated vehicles, and the attribution of intentions to such systems.
Given the state of the art in psychology and neuroscience, there are also at least two very different intuitions that one might have:
- On the one hand, it has been well known for decades from psychological experiments that people tend to interpret even simple moving shapes in terms of more or less human-like actions and intentions. So the first intuition might be that this should also apply to artificial agents.
- On the other hand, much recent social neuroscience research, not least the discovery of the mirror (neuron) system, also points to the importance of embodiment and morphology. So the second intuition might be that humans should be able to more or less easily understand the behaviour of human-like agents and robots, but not necessarily the behaviour of industrial robots, automated vehicles or autonomous lawnmowers.
To what degree, and how precisely, each of these mechanisms might be involved in human-agent interaction remains largely unknown. This might very well vary between different types of agents, such as social robots, virtual agents, or automated vehicles. It may also depend, at least in part, on how humans perceive the agent: previous research has shown that humans adapt their behaviour according to their beliefs about the cognitive abilities of other agents (natural and artificial). Conversely, the same insights and intuitions are relevant for artificial agents recognising human intentions, which is arguably a prerequisite for pro-social behaviour and necessary for engaging in, for instance, instrumental helping or mutual collaboration. Developing artificial agents that can interact naturally and effectively with people therefore requires creating systems that can perceive and comprehend intentions.
Full-day workshop on 17 October 2017. Detailed schedule TBA.
A journal has expressed interest in publishing a special issue based on the workshop as post-proceedings.