We are currently organizing a workshop on “Intentions in Human-Agent Interaction”, to be held on 17 October 2017 in conjunction with the 5th International Conference on Human-Agent Interaction (HAI 2017), Bielefeld, Germany, 17-20 October 2017.
We are currently guest-editing a special issue/research topic on “Intentions in Human-Robot Interaction” for the journal Frontiers in Neurorobotics. The manuscript submission deadline has passed.
- workshop on “The Role of Intentions in HRI” at HRI 2017, Vienna, 6 March 2017
- workshop on “Communicating Intentions in HRI” at RO-MAN 2016, New York, 31 August 2016
- workshop on “Intention Recognition in HRI” at HRI 2016, Christchurch, New Zealand, 7 March 2016
Background & Motivation
Research in the cognitive sciences, not least social neuroscience, has in the last 10-20 years made substantial progress in elucidating the mechanisms underlying the recognition of actions and intentions in natural human-human social interactions and in developing computational models of these mechanisms. However, there is much less research on the mechanisms underlying the human interpretation of the behaviour of artefacts, such as robots or automated vehicles, and the attribution of intentions to such systems.
Given the state of the art in psychology and neuroscience, there are also at least two very different intuitions that one might have:
- On the one hand, it has been well known for decades from psychological experiments that people tend to interpret even simple moving shapes in terms of more or less human-like actions and intentions. The first intuition, then, is that this should also apply to robots and other autonomous systems.
- On the other hand, much (social) neuroscience research in the last 10-20 years, not least the discovery of the so-called mirror (neuron) system, points to the importance of embodiment and morphological differences. This suggests a second intuition: that humans might more or less easily understand the behaviour of very human-like robots, but not necessarily the behaviour of, for example, autonomous lawnmowers or automated vehicles.
To what degree, and how precisely, each of these mechanisms is involved when interacting with artificial agents remains unknown. It may, for instance, depend at least in part on the human perception of the agent: previous research has shown that humans adapt their behaviour according to their beliefs about the cognitive abilities of another (even artificial) agent, and we have previously suggested that such agents need to be understood in terms of how socially interactive they are, and how tool-like their purpose is.
Conversely, the same insights and intuitions are relevant for robot recognition of human intentions, which is arguably a prerequisite for pro-social behaviour, and necessary to engage in, for instance, instrumental helping or mutual collaboration. Developing robots that can interact naturally and effectively with people therefore requires systems that can perceive and comprehend intentions in other agents.
For research on human interaction with artificial agents such as robots in general, and on mutual action/intention recognition in particular, it is therefore important to be explicit about the theoretical framework(s) and the assumptions underlying technological implementations. This also has ramifications for evaluating the quality of the interaction between humans and robots (as opposed to the functioning of the robot itself).