Trusting your robot colleagues part two: Building trust
Blog 12th July 2022
Author: Ian Tellam, Postgraduate Researcher, School of Social Sciences, The University of Manchester
In the last part we looked at the ‘ontological status’ of robots – what kind of ‘being’ or ‘thing’ we categorise them as – and how this affects our attitudes towards them. How we relate to robots has a significant impact on how they are perceived, and subsequently accepted (or not), as they are increasingly employed to work alongside us.
For robots to truly gain a positive foothold in a hybrid workforce, then, developers must walk a fine line: build a machine that is sufficiently ‘robotic’ to be considered as such, and its success will be measured not only by its technical ability but also by its acceptability among a workforce that won’t just be using these devices as tools, but working alongside them.
As a ‘boundary object’, it is not enough for a robot to be technically effective to some degree: once its nature as a ‘robot’ is accepted, it crosses that boundary into the workers’ realm of expertise, where its effectiveness will be judged by an expanded set of criteria. It will now likely be judged against the human who would otherwise be put into that role. Efficiency and reliability are factors in both cases, but building trust requires more than that, especially when robots start to adopt increasing levels of automation and decision-making.
Robot behaviour needs to be relatable, comprehensible and predictable, yet able to show enough initiative to overcome the occasional unexpected situation. It needs to be trusted not to do something completely outrageous, and perhaps even to learn from its mistakes in some fashion. Consider, for instance, that on a fundamental level human co-workers have a sense of self-preservation, and (hopefully) a concern for the well-being of their fellow employees – could a robot with no concern for its own survival ever be trusted in risky scenarios in quite the same way a human being would?
“The essence of trust is a peculiar combination of autonomy and dependency. To trust someone is to act with that person in mind, in the hope and expectation that she will do likewise – responding in ways favourable to you – so long as you do nothing to curb her autonomy to act otherwise.” (Ingold 1994:13).
There’s a well-known, if somewhat clichéd, trust experiment that used to do the rounds in various corporate team-building exercises (perhaps it still does) whereby one co-worker would stand behind another and fall backwards, with the expectation that they would be caught by the other. Participants would then swap and perform the task again, the catcher becoming the catchee. Imagine, instead, if one of those participants was a robot – how would this change the mental processes required for us to trust that we would be caught? What influences the amount of trust we have in a human in that situation?
It’s less likely to be the physical capability of the catcher, or their ability to understand the instructions that have been issued; these factors are probably taken for granted, or at the very least assessed at a glance. Beyond physical dependability we also extensively consider the personality and mental state of the other person, as well as the social situation in which the scenario is taking place – it would be fairly unprofessional to drop someone on the floor, and the other team members might take a dim view of this behaviour. There is also the fact, of course, that after one catch the participants swap places, catcher becoming catchee – trust is mutual, and we often depend on others because in different situations they depend on us.
But with a machine, these factors, which are so vital in human trust relationships, no longer apply. Perhaps instead we would ask for a demonstration of the catching robot, maybe multiple demonstrations, along with a close physical inspection and assessment to ensure its dependability and reliability. These physical attributes become emphasised because beyond them there is little else to assess: the robot feels no social pressure to perform, and there is no mutuality to the arrangement. And when the roles are reversed, apart from the worry about breaking a potentially expensive piece of equipment, is there any other reason to bother catching the robot?
When a machine takes, at least in part, the form of a co-worker, as workplace robots often do, trust is more difficult to forge as so many of the reasons we come to trust other people, our colleagues, disappear. A robot will never consider what others think of its performance, there is no mutuality that develops in the trust relationship, and the robot does not care whether it comes to harm – let alone whether we do.
A machine ‘becomes’ a robot when its role in the workplace deems it to be such, and when this happens it moves into a category that, at least in part, overlaps with the well-understood categories of both ‘worker’ and ‘tool’. When relating to a machine in such a role, the traditional ways in which ‘trust’ is formed lose their pertinence. As robots become an ever more important addition to the workforce in many industries, especially in high-risk environments, we will need to adapt and develop our traditional techniques and practices of trust-forming to work effectively alongside these new, non-human, colleagues.
Read about Ian’s research.