Which Virtues?

February 4, 2016 | Jim Burrows

When I first started this project, several months ago, I almost immediately focused on virtue ethics as my approach to making personified systems behave in a trustworthy way. Today, my preferred approach is something more along the lines of prima facie duties, using a small set of virtues to provide an organizing principle as the suite of duties grows. Either approach, though, demands that we select an appropriate set of virtues.

Introduction

The precise set of virtues that is appropriate for a particular personified or artificially intelligent system, and the relationship of the virtues to each other, must, of course, be based upon the exact nature of the system in question. The virtues of a personal assistant may need to vary considerably from those of a medical caregiving system or one designed for war. In this posting, I will be concentrating primarily on systems that act as assistants and companions with only a limited degree of autonomy, and that are not responsible for the life and death of human beings. Robot doctors, nurses, and war fighters are well outside our purview, as are full-fledged drivers. Those systems require specialized ethical systems and extended oversight.

Based upon a number of systems of human virtues, from the Scout Law to the attributes required of a butler or valet as described on websites devoted to those professions, I've identified eight virtues of a general-purpose trustworthy personified system. Trustworthiness itself may be regarded as a ninth, overarching virtue. The eight come in two groups and are as follows:

  1. Helpfulness
  2. Obedience
  3. Friendliness
  4. Courtesy

  5. Loyalty
  6. Candor
  7. Discretion
  8. “Propriety”

The first four virtues in the list are what I have been regarding as “utilitarian” or “functional” virtues. An assistant that is helpful, obedient, friendly, and courteous is more useful or easier to use. They map fairly directly to established UI and UX practices. 

The next four I consider broadly as “ethical” virtues. A system that is loyal, forthright, discreet, and in general trustworthy comes as close as can be attained, short of full intelligence, to behaving ethically, or perhaps “properly”. In this posting, I will focus on Trustworthiness and its four “ethical” subordinate virtues, laying out how they are defined both philosophically and operationally in a general sense. Actually implementing them for a specific application, or a class or family of applications, would require a much more specific and detailed operationalization than I can manage here. Still, it is important to understand not only the role of these virtues in human ethics, but also their role in the operation of technological systems.

As I was researching, an additional attribute (“Propriety”) emerged as important, though exactly how is still debatable. This is the question of what degree of emotional involvement such a system should be geared for. At the extreme, it is the question of whether humans should be allowed or encouraged to love the system’s persona. Should these systems attempt to retain a degree of professional distance, or should they seek friendship and emotional involvement? I am referring to this as the virtue of “propriety” for the nonce.

Loyalty

When we speak of loyalty, there are two aspects: “whose interests are being served by the actions of the system?” and “how consistent is the answer to the first question?” Different systems will potentially have different sets of loyalties.

Eric Horvitz’s personal assistant, Monica, which interacts with people at the door to his office, could be loyal to him, to his employer, or to the vendor that provided the system. In this case, the employer and the vendor are both Microsoft, but when the system goes commercial and John Wylie at Acme Services purchases one, it will make a substantial difference to him whether it is loyal to Acme or to Microsoft, or shifts inconsistently between them. Given that his assistant will have access to aspects of both John’s personal and work behavior, it is important for him to know that Monica works for Acme, and not for him personally, and it will presumably be important to Acme that she is working for them and not for Microsoft.

Likewise, when George and Mary obtain a virtual nanny to look after their children, will they be told “I’m sorry, but Wendy and the boys have requested privacy” or will Nana spy on the children? How about when they ask the virtual caretaker that they bought for George’s aging mother? Does it answer to them or Grandmère? Does it matter if they just leased the system? Does that make it loyal to the company rather than the parent or the child?

What if Wendy and the boys requested privacy to smoke pot? Should Nana rat them out to their parents or to the law, or keep their secret? If Grandmère is indulging in the cooking sherry, should George and Mary be told? Should her doctor know, in case it is contraindicated given her medicine? How about the insurance company that paid for the system?

Liability, ownership, and the primary “user”/“care recipient”/etc. are all factors in deciding where the system’s loyalties belong, and potentially in deciding on a hierarchy of loyalties, or reserved privileges.
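To make this a bit more concrete, here is a minimal sketch, in Python, of one way a hierarchy of loyalties with reserved privileges might be represented. Everything in it (Principal, LoyaltyPolicy, authority_for, and the example roles) is hypothetical and purely illustrative; it is not drawn from any existing system.

    # Hypothetical sketch: a hierarchy of loyalties with reserved privileges.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Principal:
        name: str   # e.g. "Acme Services", "John Wylie", "Vendor, Inc."
        role: str   # e.g. "owner", "employer", "user", "care recipient"

    @dataclass
    class LoyaltyPolicy:
        # Ordered from highest to lowest precedence; earlier principals win
        # when interests conflict.
        hierarchy: list
        # Privileges reserved to a principal regardless of its place in the
        # hierarchy, e.g. a vendor's right to diagnostic data, or a parent's
        # right to safety alerts.
        reserved: dict = field(default_factory=dict)

        def authority_for(self, request: str) -> Principal:
            """Return the principal whose interests govern this request.

            A real system would need far richer context; this simply checks
            reserved privileges in hierarchy order and otherwise defers to
            the principal at the top of the hierarchy.
            """
            for principal in self.hierarchy:
                if request in self.reserved.get(principal.name, ()):
                    return principal
            return self.hierarchy[0]

Even a toy structure like this makes the key questions explicit: who sits at the top of the hierarchy, and which privileges are reserved to whom.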

As autonomous collision avoidance becomes more sophisticated and prevalent in our autos, they will inevitably be faced with making hard choices and tradeoffs. Does the system primarily strive to protect its passengers and secondarily minimize the harm caused to others? Or does it willingly sacrifice the safety of the passengers when doing so protects a larger number of bystanders? Would a privately owned auto behave differently from a commercial limousine, or a municipal bus? Does the action of an autonomous long-haul truck depend upon the nature and value of the cargo? In short, is the auto driver loyal to its passengers, owner, manufacturer, insurer, or society as a whole?

It is worth noting that, to the extent that software engineers or architects build in explicit trade-offs in advance for dealing with emergencies, they are faced with an increased responsibility and liability as compared with a human driver responding in the heat of the moment. In the fraction of a second that a driver faces a disaster and reacts, perhaps with little or no conscious thought, we generally don’t hold them fully responsible for the decision to veer right or left. However, if a programmer creates an explicit set of priorities, or a QA engineer approves the shipment of an AI that has learned or derived a specific set of priorities, then those acts are made deliberately and with time to weigh and choose the consequences.
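To see what it means for such trade-offs to be made deliberately and in advance, consider the following deliberately oversimplified Python sketch. The ordering shown is arbitrary, chosen only to illustrate the form an explicit priority list might take; it is not a recommendation, and no real system is being described.

    # Oversimplified, hypothetical illustration of an explicit, pre-committed
    # priority ordering for collision avoidance. The ordering is arbitrary.
    COLLISION_PRIORITIES = [
        "harm to pedestrians and cyclists",
        "harm to occupants of other vehicles",
        "harm to own passengers",
        "property damage",
    ]

    def rank_maneuvers(maneuvers):
        """Order candidate maneuvers so that a higher priority always
        dominates a lower one. Each maneuver is assumed to carry a
        predicted-harm score per priority (0.0 = none, 1.0 = severe);
        lexicographic comparison enforces the ordering."""
        return sorted(
            maneuvers,
            key=lambda m: tuple(m["predicted_harm"][p] for p in COLLISION_PRIORITIES),
        )

The point is not this particular ordering but the fact that, once written down, the ordering becomes an artifact that someone chose, reviewed, and shipped.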

This means that the introduction of personified and semi-autonomous systems actually introduces issues of responsibility and liability beyond those that would apply if an unassisted human were solely involved. How this additional burden of responsibility is handled is unknown at present, and will remain so until laws are passed and precedents are set in our courts. Thus the legal and economic pressures around determining the loyalties of personified systems, and of the AGIs that follow after them, will be in flux for the foreseeable future.

Candor

The virtue of candor in a personified system may be considered an elaboration of the property of “transparency” that we are used to in discussing software and business in general. A candid system is one that makes its loyalties and its actions clear to the parties involved: its client, patient, boss, owner, or the like. Someone being served by a candid personified system should be aware of what that system’s priorities and loyalties are and what actions the system has taken, in general. It need not report every minor routine action, but it should ensure that the person(s) served know in general the sorts of actions it routinely performs, who controls it and sets its policies and priorities, and what those priorities are. It should not be evasive if additional details are requested.

Conflicting loyalties may well be inevitable, as noted above. As the realities of liability surrounding autonomous agents develop, manufacturers are likely to reserve certain privileges. Similarly, in the case of care-taking systems that are provided through health or other insurance, the insurer or other agency that pays for or provides the system may reserve some rights or demand certain priorities and limitations. Personal assistants may be constrained or controlled by the individual user, their employer, the vendor providing the system, or the cloud-based and other services that are used to power it.

These ambiguities and conflicts in system loyalty will be tolerable only if the user is clearly aware of them. In other words, the system must be candid in revealing its loyalties, priorities, limitations, capabilities, and actions. 

In considering the virtues and behaviors of hypothetical fully intelligent or moral agents, one of the potential virtues is honesty. For the purposes of the present effort, which is limited to the “virtues” and behaviors of more limited personified agents, I have lumped honesty and candor into a single category. True dishonesty requires intention: a lie is not merely a falsehood, but an intentional and deceptive falsehood. Pre-intelligent systems do not possess true intent, and so I am subsuming “honesty” into “candor” in this discussion. For true AGIs, the two might well need to be separate.

An important aspect of candor is making an effort to be understood, and not merely reciting information. A candid system should speak simply and briefly in plain language, allowing or encouraging requests for further explanations, elaborations, or details. Reading the system’s terms of service or privacy policy aloud is not, in fact, particularly informative. Responding with a simplified plain-language summary, and asking whether more details are required, would be much better and more candid.
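As a sketch of that kind of layered disclosure, something like the following would do; the policy statements and function names are invented for illustration and do not describe any actual product.

    # Hypothetical sketch of layered disclosure: a plain-language summary
    # first, with details only on request. The statements are invented.
    PRIVACY_SUMMARY = (
        "I send your spoken requests to my vendor's servers to understand them, "
        "and I share your work calendar with your employer's scheduling service."
    )

    PRIVACY_DETAILS = [
        "Items you mark as private are shown to others only as 'busy'.",
        "Your employer can review aggregate usage statistics, not transcripts.",
    ]

    def answer_privacy_question(wants_details: bool = False) -> str:
        """Answer candidly in plain language, offering elaboration."""
        if not wants_details:
            return PRIVACY_SUMMARY + " Would you like more detail?"
        return PRIVACY_SUMMARY + "\n- " + "\n- ".join(PRIVACY_DETAILS)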

Discretion

The need for discretion in digital virtual assistants is underscored by the following tension. On the one hand, more and more of our confidential information is making its way into our various computers and mobile devices, and we need to protect the most sensitive information about us from leaking out. On the other hand, the phrase “information economy” is becoming more and more literally true. We have come to view access to network and computer services as something that we do not pay for with money, but rather with information about us and our habits. More than that, what was once mere data has become information by being collected, correlated, and analyzed, and as we begin talking to personalized systems with voice recognition, that information is being parsed and analyzed semantically to the point where it represents real knowledge about us.

In a typical voice-driven system like Siri, Cortana, Google Now, or Amazon Echo, the attention phrase that allows the system to know we are talking to it (“Hey Siri”, “Cortana”, “OK, Google”, or “Alexa”, respectively) is recognized locally on the device, but the commands and queries addressed to the system are sent to a remote server in the cloud, where they are analyzed semantically, using general and specific grammars, the user’s contacts and other information, and the context of recent commands and queries. The system identifies the actual subject matter being spoken about and uses that to distinguish which among a number of similar-sounding words is intended. Transforming simple audio data into parsed and analyzed semantic knowledge, with meaning, makes the information that much more valuable, and access to it that much more intrusive.
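That division of labor can be sketched, in deliberately simplified form, as follows. Every function here is a placeholder standing in for proprietary components; it does not reflect how any particular vendor’s system is actually implemented.

    # Simplified, hypothetical sketch of the local/remote split: only the
    # attention phrase is detected on the device; the utterance itself goes
    # to the cloud for semantic analysis.
    def detect_wake_word(frame: bytes) -> bool:
        """Stand-in for a small on-device model; nothing leaves the device here."""
        return frame.startswith(b"WAKE")

    def cloud_nlu(utterance: bytes, user_context: dict) -> dict:
        """Stand-in for the remote service that parses the utterance using
        grammars, contacts, and recent-query context. This is the step at
        which raw audio becomes semantic knowledge about the user."""
        return {"intent": "unknown", "entities": [], "context_keys": list(user_context)}

    def handle_frames(frames: list, user_context: dict) -> None:
        awaiting_command = False
        for frame in frames:
            if not awaiting_command:
                awaiting_command = detect_wake_word(frame)        # stays local
            else:
                print("parsed:", cloud_nlu(frame, user_context))  # leaves the device
                awaiting_command = False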

On the other horn of our dilemma, accessing services without paying for them, either monetarily or with information, is a failure to participate in the economy of the Internet, at best, and theft or its moral equivalent at worst. The implicit deal is that systems provide us information, entertainment, connectivity, and the like, and we pay for it with information about ourselves. If we refuse to provide any information, we are pulling out of or leeching off of the economy, but if we pay for access and services with our most vital secrets, we are taking huge risks.

A discreet real-life butler, faced with this situation, would protect our secrets, family matters, confidences, and business secrets, and would pay for services with only the most innocuous bits of information, those directly involved in specifying and obtaining the desired services. He would be frugal with the master’s money and information alike, exchanging them only for true received value, and with an understanding of each bit’s value and risks.

While the local system is unlikely to be anywhere near as good at analyzing our utterances, commands, and queries as the remote servers, it can do some analysis and inference, and we can label questions and other information ourselves. It can also ask for clarification of the degree of confidentiality. One can readily imagine saying, “Jeeves, make discreet inquiries into the cost of …”, or using different attention words for different assistants that handle different aspects of our lives, and so forth. Creating more intelligent systems capable of a degree of discretion should be possible.
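A very rough sketch of what labeling a request with a degree of confidentiality might look like follows; the labels, the keyword trigger, and the routing rules are all purely hypothetical.

    # Purely hypothetical sketch of routing a request by confidentiality label.
    # A phrase like "make discreet inquiries" sets the label explicitly;
    # otherwise the local system would have to infer it or ask.
    def route_request(text: str, label: str = "personal") -> str:
        if "discreet" in text.lower():
            label = "confidential"
        if label == "confidential":
            return "analyze locally; send only a generic, stripped-down query"
        if label == "personal":
            return "send the request, but withhold identifying context"
        return "send the request and its context to the remote service"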

Discretion can, and should, be combined with candor. A superior system would not only make intelligent distinctions between the confidential and the trivial, but would also let us know the distinctions and priorities that it is using.

Propriety

At first blush, it would seem that one of the desirable characteristics of a personified system is for it to be emotionally engaging—friendly or even lovable. This seems to be a natural outgrowth of creating “user friendly”, approachable, and generally easy-to-use software. It is certainly hard to see any virtue in creating repulsive, hostile, or hard-to-use systems. 

Our fiction is full of lovable robots and artificial people, from Tik-Tok of Oz to Astro Boy, the Jetsons’ Rosie, Rhoda of “My Living Doll”, Buck Rogers’ Twiki, Doctor Who’s K-9, David and Teddy of “A.I.”, Baymax of “Big Hero 6”, and Wall-E. Then, of course, there are the sexy robots, from Maria of “Metropolis”, to Rhoda, to Ava of “Ex Machina”. Many of Hollywood’s sexiest actresses have played robots, gynoids, and fembots.

However, upon closer consideration, being emotionally appealing, engaging, and even seductive may not be as positive as it at first seems. Matthias Scheutz and Blay Whitby each discuss some of the negative aspects of too much emotional attachment in their chapters of “Robot Ethics”.

These are not new issues, nor peculiar to autonomous systems. For example: military and other hierarchical organizations have non-fraternization regulations; medical ethics discourage doctors from treating family members and friends; most care-giving professions have rules against romantic, sexual, and other intimate relationships between care-givers and their charges.

Children offer a particularly sensitive area. On the one hand, it would be highly undesirable for interactions between a child and a personified system to replace socializing and bonding with actual people. On the other hand, it is quite conceivable that very shy children, or those on the autism spectrum, might be able to use the simpler and less threatening relationships with personified systems as a stepping stone to more complex relationships with other people.

All of this leads us to the concept of “propriety”: a personified system should employ an appropriate degree of emotional involvement. It should strive to be neither so cold and distant as to be unapproachable and difficult to interact with, nor give a deceptive appearance of a mutual emotional involvement that cannot be delivered.

This is a topic that deserves a whole posting to itself.

Trustworthiness

Pulling these virtues together, we can build a picture of trustworthiness. A good personified assistant should behave professionally, fitting into its environment appropriately; it should have explicit loyalties, be candid about them, and act upon them with discretion. One can see how this might be suited to an implementation modeled after the prima facie duties of the Andersons (see "Which Ethics? — Deontology") and the notion of an explicit deciding principle to guide the tradeoffs between them.

Candor requires some mechanism by which the system can, among other things, explain its decision-making process and principles. Several of the systems explored in the last few postings provide mechanisms to enable this. Bringsjord (see "Which Ethics? — Deontology" again) has demonstrated DCEC-to-English as well as English-to-DCEC translation. The Andersons' GenEth system for deriving principles from case studies also results in explicit rules that can be laid out using the language supplied to describe the test cases. Finally, Winfield's consequence engine (see "Which Ethics? — Consequentialism") can cite the specific consequences that caused actions to be rejected. The trick with all of these will be simplifying the explanations down to a level that is commensurate with true candor.
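The simplification itself could be quite modest: map whichever duty, rule, or predicted consequence actually drove a decision onto a short plain-language template, and hold the full trace in reserve for anyone who asks. The duty names and wordings below are invented for illustration and are not taken from DCEC, GenEth, or the consequence engine.

    # Invented illustration: reduce a decision trace to a candid one-line
    # explanation, with the full trace available on request.
    EXPLANATIONS = {
        "duty:non_maleficence": "I didn't do that because it could have caused harm.",
        "duty:obedience": "I did that because you asked and no one would be harmed.",
        "consequence:privacy_loss": "I kept that to myself because sharing it would reveal private information.",
    }

    def explain(decision_trace: list, wants_details: bool = False) -> str:
        headline = EXPLANATIONS.get(decision_trace[0], "I followed my standing instructions.")
        if wants_details:
            return headline + " (full trace: " + "; ".join(decision_trace) + ")"
        return headline + " Would you like the full reasoning?"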

Aristotle's view of the virtues is that each represents a mean between two extremes, and that is a theme that comes up here as well. A candid explanation, for instance, must be balanced between an overwhelming level of detail that could mask falsehoods and unpleasant truths, and a brevity that omits the detail needed for understanding. Participating discreetly in the information economy again means balancing disclosing too much against disclosing too little, and propriety requires a balance between coldness and the illusion of an impossible human connection.