Our trust in autonomous systems

March 17, 2016 | Jim Burrows

In his latest edition of “AI in the News” (#15, March 2, 2016), Earl linked to a story in the New Scientist entitled “People will follow a robot in an emergency – even if it’s wrong”, and commented: “They had better be trustworthy... because we do, in fact, trust them.”

This is an issue that has bothered me for most of the time that I have been working on my “Personified Systems” project. The very premise of my project was that if we are going to be dealing with more and more systems in ways that are more and more like the ways we deal with fellow humans, rather than merely as tools, then those systems will have to “play well with others”. They will have to behave like good citizens, good people. They will need to be trustworthy. How do we make them worthy of trust, so that we can in good conscience encourage people to trust them?

But what does it do to that premise if we already trust autonomous systems, personified systems? What does it mean that even before we take special efforts to make them worthy of trust, people trust them, especially if they trust them more than actual people? Ultimately, I decided that we need to ensure that our pseudo-people do not have feet of clay, that people may lose that trust if we don’t ensure that it is warranted, and that we should not squander the good will that personified systems have.

So, what do we know about our trust in robots and other personified systems? First off, from the study that Earl cited we know that people will follow a robot in a perceived emergency, even when its directions are contradicted by signs and the route it indicates is partially blocked and dark, and even when they have seen the robot fail earlier and been told that it is having problems.

Beyond that, there are studies showing that humans will happily take direction from robot teammates, and can be pressured by robot supervisors into performing unpleasant tasks. Additionally, people trust computers acting as journalists, rating their work as more trustworthy than that of human journalists. They also prefer to trust machines with certain financial decisions. Finally, humanizing robots and autonomous systems makes people more likely to coöperate with them, and to trust them.

Recent studies seem to indicate that we trust computers and robots with our safety and our money, and that we are willing to follow their directions and trust them to inform us, even if we find their writing a bit boring. I’ve summarized the various pieces of research below, and given links to the studies and the reportage on them (apparently all written by humans). Those are the observed effects. What are the explanations and the implications?

I can think of a couple of possible causes. First, there could be a “honeymoon effect”. Robots, computers and autonomous systems are relatively new. We have not learned all the ways that they could disappoint us and so they still have the benefit of the doubt. If that’s the case, then it could wear off. That suggests that those of us in the high tech community should take advantage of the slack we are being cut, and should work to make sure that our creations live up to people’s expectations.

A second possibility is that since machines are unemotional and have no motives of their own, either for good or for evil, we tend to believe that they are objective. Given that, it would seem that if we trust machines, which we take to be neutral, more than humans, then either we think that humans are in general more malicious than beneficent, or at least more fallible, or we are terribly risk averse, more concerned about risks than benefits. This is consonant with the tendency for our politics to become increasingly divisive, and with what appears to be a spread of mutual distrust. Since trust is the foundation of civilization, we really cannot afford to have it break down in a major way.

If this second possibility is true, if we now distrust each other to a dangerous degree, but we still trust machines, then we technologists are offered an opportunity: the chance to use our trust in machines to help us rebuild our trust in each other. This is actually a somewhat tricky proposition. We risk collapsing the machine trust down to the level of inter-human trust rather than building up our trust in humans. Again, we need to be careful to make sure that our creations live up to people’s expectations.

Besides suggesting that my quest to make personified and other autonomous systems worthy of trust is highly worthwhile, this line of thinking also suggests another couple of projects to me—for instance, one to use automation to start breaking down the bubbles that separate us and in doing so encourage mutual understanding and thus trust. I’ll go into that later. For now, here’s a short summary of some of the research in question.

The Research

Science 2.0 recently published an article entitled “Most People Trust Computers More Than Their Business Partners”, reporting on a study in which subjects played a financial game with small real-world winnings: they could either share a small amount evenly with their partner, or take a larger amount whose division between the two of them was handled by either the partner or a machine. Even though the computers were programmed to do the division exactly the same way as real test subjects did, and the subjects knew that, they went for the higher amount more often when the division was performed by the machine. The explanation seems to be that people were averse to being disappointed by a person.
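
To make the structure of that choice concrete, here is a minimal sketch in Python. Every number in it (the pot sizes, the divider’s typical return rate, and the trust weights) is a hypothetical placeholder of my own, not a figure from the study; the point is only that when the division rule is identical, the decision can still tip on how much the subject trusts the divider.

```python
# Hypothetical payoffs: a guaranteed even split of a small pot, versus
# a larger pot whose division is delegated to the partner or a machine.
SMALL_POT = 10.0   # split evenly: a guaranteed 5.0 for the subject
SAFE_PAYOFF = SMALL_POT / 2
LARGE_POT = 30.0   # divided by whoever, or whatever, the subject picks
FAIR_SHARE = 0.35  # the divider's typical return rate; identical for
                   # humans and machines, as in the study's setup

def subject_choice(divider: str) -> str:
    """Take the gamble when subjective trust in the divider makes the
    expected return beat the sure thing."""
    trust = {"machine": 0.8, "human": 0.4}[divider]  # illustrative only
    expected_return = LARGE_POT * FAIR_SHARE * trust
    return "large pot" if expected_return > SAFE_PAYOFF else "even split"

print(subject_choice("machine"))  # -> "large pot"   (8.4 > 5.0)
print(subject_choice("human"))    # -> "even split"  (4.2 < 5.0)
```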

A Psychology Today article headlined “Why Some People Would Prefer a Robot to a Human Boss”, in a wide-ranging discussion of the potential for robots to displace various types of workers, cites a couple of studies on robots directing human workers. The first study, “Decision-Making Authority, Team Efficiency and Human Worker Satisfaction in Mixed Human-Robot Teams”, conducted at MIT, showed that in a mock manufacturing task, both efficiency and human satisfaction were maximized when the robot was responsible for all scheduling and coördinating decisions.

In the second study, “Would You Do as a Robot Commands? An Obedience Study for Human-Robot Interaction”, conducted at the University of Manitoba, subjects were given a large amount of tedious work to do, which led them to protest. A human researcher or a Nao robot had the task of persuading the subjects to continue despite their protests. While the robot was less persuasive than the human researcher, it was successful 46% of the time. This shows that humans do accept robot direction even when the task is unpleasant and not shared by the robot.

In the area of journalism, Vice covered a study by Christer Clerwall in an article titled “People Think Computer Journalists Are More Trustworthy Than Human Ones”. The findings of that study were expanded upon in another study, written up in an American Journalism Review article, “Robot Reporters or Human Journalists: Who Do You Trust More?” In the first study, subjects were asked to rate stories, some of which were written by humans and some of which were machine-generated. The journalist-written text was rated as better written, more coherent, and more pleasant to read, while the computer-written text was rated as more boring, but also more informative and trustworthy. These were, however, all relatively minor differences, and on the whole the subjects did not seem to discern a major difference between humans and computers (see Figure 1 below).

Figure 1: Mean rank values for each descriptor for each group (journalist or software)

In the second study, the test materials were identical except that, for half of the subjects, the author was described as a journalist, and for the other half, as a computer. For most subjects, the evaluations of trustworthiness, credibility, and expertise were the same whether they thought the author was a computer or a human. The only exception was the subjects who were, themselves, journalists. They differed in two ways. First, they rated the supposedly human journalists as more credible than the supposed computers. Second, they rated the expertise of the computers higher than the non-journalists did.

[This article was previously published as a note on Facebook.]