Would a robot trust you? Developmental robotics model of trust and theory of mind
Authors: Samuele Vinanzi, Massimiliano Patacchiola, Antonio Chella, Angelo Cangelosi
Journal: Philosophical Transactions of the Royal Society B
Publication Date: 11 March 2019
Department of: Computer Science
A robotics model of trust and theory of mind
Researchers from the School of Computer Science at the University of Manchester have developed a novel computational model of Theory of Mind to support trust in human-robot interaction. Various studies have examined how people attribute trust to robots, but no current models investigate the opposite scenario, where a robot is the trustor and a human is the trustee. This research proposes an artificial cognitive architecture, based on the developmental robotics paradigm, that can estimate the trustworthiness of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind, the psychological ability to attribute to others beliefs and intentions that can differ from one's own. The novel cognitive architecture integrates a probabilistic Theory of Mind and trust model supported by an episodic memory system. The architecture was tested using developmental psychology experimental paradigms, and the results demonstrate a new method to enhance the quality of human-robot collaboration. This work will allow autonomous robots to evaluate the trustworthiness of their sources of information in joint task scenarios where people and robots must collaborate to reach shared goals.
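To give a flavour of what a probabilistic trust estimate can look like, here is a minimal illustrative sketch, not the authors' architecture: it assumes a simple Beta-Bernoulli model in which a robot counts how often a human informant's past advice turned out to be correct (each interaction standing in for an episode in memory) and follows advice only from informants whose estimated reliability is high enough.

```python
class TrustEstimator:
    """Toy Bayesian trust estimate for one human informant.

    Hypothetical illustration only; the published architecture uses a
    richer Theory-of-Mind model, not this simple counting scheme.
    """

    def __init__(self):
        # Beta(1, 1) prior: no evidence yet, trust estimate starts at 0.5.
        self.correct = 1
        self.incorrect = 1

    def observe(self, advice_was_correct: bool) -> None:
        """Record the outcome of one interaction with the informant."""
        if advice_was_correct:
            self.correct += 1
        else:
            self.incorrect += 1

    def trustworthiness(self) -> float:
        """Posterior mean probability that the next piece of advice is correct."""
        return self.correct / (self.correct + self.incorrect)

    def should_follow_advice(self, threshold: float = 0.5) -> bool:
        """Decision rule: act on advice only from sufficiently trusted informants."""
        return self.trustworthiness() > threshold


# Example: an informant who was right three times and wrong once.
estimator = TrustEstimator()
for outcome in (True, True, False, True):
    estimator.observe(outcome)
print(estimator.trustworthiness())      # posterior mean: 4/6 ≈ 0.667
print(estimator.should_follow_advice()) # True
```

The key design point this sketch shares with the paper's framing is that trust is learned from the history of interactions rather than fixed in advance, so an unreliable informant's influence on the robot's decisions decays as evidence accumulates.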
- Developmental robotics models the acquisition of cognitive skills by simulating developmental psychology experiments.
- Trust is a critical issue in human-robot interaction: as robotic systems gain complexity, it becomes crucial for them to blend into our society by maximizing their acceptability and reliability.
- Theory of Mind is the psychological ability to attribute beliefs and intentions to others, and it is used to establish trust.