The Roadmap, part 1
In this article, I will outline what I would do if designing the behavior of trustworthy personified systems was a major research and development effort within a large well-funded R&D organization. In my next article, I will outline what I have been and am continuing to do on my own, during my current “sabbatical”.
Doing the job properly, identifying which principles of system behavior should be adopted and then beginning to implement them, is a major R&D effort that crosses a great many of the disciplines I have studied and practiced during my career. It is certainly well beyond the scope of a short, one-man project. In this section I will outline the types of research and development activities that would be involved. In the real world, any enterprise or institution that mounted an R&D project or program in this space would probably scope the effort differently, and while it is quite possible to envision that scope being larger, it is more likely that it would be narrower than what I am describing here. Still, it seems worthwhile to look at it in a large, generalized context, at least until I can identify a plan to realize some piece of this personally.
Business: Choose a focus
“Personified systems”, as I have defined them, cover a huge array of existing and emerging products. Any actual R&D project is likely to address only a subset of them, chosen according to the business and research requirements of the enterprise or institution doing the research and/or development.
Some of the types of systems that are included in the general category are:
- Generalized personal assistants: This category includes such systems as Apple’s Siri, Google Now, Microsoft’s Cortana, and Amazon’s Echo, aka “Alexa”. These systems respond to voice and text commands and queries given in a quasi-natural language for human/machine dialogs, and perform tasks such as looking up information, performing calculations, creating and scheduling reminders and calendar events, sending emails and text messages, taking notes, creating shopping and task lists, and the various functions that are becoming the common daily uses of computers and mobile devices.
- Specialized virtual executive assistants: This is a more advanced version of the previous category and includes systems such as “Monica”, the virtual assistant of Eric Horvitz at Microsoft Research. Whereas the simpler assistants of the previous category interact solely with the user they serve, Monica, assisted by a small Nao robot and other systems at Microsoft Research, deals with other people on her user’s behalf. Monica greets visitors at Horvitz’s door, schedules appointments and in general fills a role that might otherwise have been filled by a secretary or executive assistant.
- Companions for children: This category includes both educational and entertainment systems that serve the roles of companion or nanny. It includes devices such as the Cognitoys green dinosaur, Mattel's "Hello Barbie”, ToyTalk’s “The Winston Show”, and many others. These systems differ from the previous categories both in that the user is a child and in that the customer is not the user. While true “virtual nannies”, robotic systems given some form of responsibility, are still in the future, there are R&D projects headed in that direction, and their behavior will be important enough that we want to be thinking about it now.
- Virtual caregivers: The Japanese in particular have been expending considerable effort building care-giving systems, especially for the aged. As the median age of the populace goes up and the number of children in families goes down, the demands and requirements for the care of the old and the infirm are growing. While autonomous systems capable of taking on these duties fully are well in the future, there are many ways that autonomous systems might assist and complement human caregivers.
- Expert system virtual assistants: Many professions are making increased use of automated virtual assistants. Virtual medical assistants are helping with diagnosis, with prescribing and managing drug regimens, and with other tasks. In the financial world, autonomous systems not only give advice, but have taken over many aspects of stock and commodity trading. In the legal field, the tools have been shifting from simple search to autonomous systems assisting with or replacing aspects of a law clerk’s job. All of these professions deal with confidential and sensitive information, and require a high degree of trust.
- Autonomous and semi-autonomous vehicles: More and more autonomous driver-assist and self-driving features are appearing not only in research and concept vehicles, but in production vehicles, and on our roads. These systems are entrusted with human safety and operate dangerous machinery. As such, they are taking on nearly human responsibilities and require substantial trust. In addition to cars, commercial airliners are already flying under automated control for the vast majority of flight time.
- Home appliances and systems: The “Internet of Things” is growing rapidly, with everything from thermostats and alarm systems to lights and entertainment systems being automated and coordinated. Voice response systems monitoring and controlling these devices are becoming more sophisticated and are merging with the general purpose virtual assistants.
- Games and toys: In addition to the talking toys and games mentioned under the “Companions for children” category, AIs act as players and non-player elements of many games. While it is not clear how critical trust is in these systems, they do play an increasing role in the public’s dealings with and understanding of autonomous systems, and may influence opinions and expectations well beyond the limits of the games they inhabit. Additionally, there is the whole area of adult and sexual toys and entertainment. Here, issues of trust and confidentiality may be quite important.
Sociology and Psychology
There are a number of social and psychological issues that need to be addressed, either through the expertise of the participants on the team or through explicit research. These questions arise both in the broad background and context in which the systems operate and in the specific interactions of the individual systems being studied and developed in the area of focus. Areas that need to be covered are:
Societal expectations: How do people think of personified systems, robots and the like? What are our expectations of them?
Natural man/machine dialogs: Given that background, how do people speak to and interact with machines? In many ways this is similar to how we interact with each other, but research shows that knowing something to be a machine alters how we speak to and interact with it. Some of this is due to the different capabilities of machines, which are not yet fully intelligent; some is due to the expectations that society, fiction and media set; and some is because of the different role that artificial systems play.
Impact upon us: For the foreseeable future, personified systems and AGIs will serve a sub-human role in society. This is likely to remain so even for AGIs until they not only are autonomous moral agents deserving of rights, but are accepted as such by society and the law. This role as “sub-human” will have an impact on us. As we treat these systems as person-like but at the same time inferior, our dealings with other humans are likely to be affected as well. Will it pressure us to go back to thinking of some people as sub-human, or will it clarify the line between all humans, who are full persons, and non-humans, who are not?
Social psychology of morality: Substantial research has been done in both the neurophysiology and the biological and social foundations of morality. Work on this project needs to be well grounded in these aspects of human morality and behavior in order to understand how artificial systems can be integrated into society.
Jonathan Haidt’s Social Intuitionist model and Moral Foundations theory, if valid and accurate, may provide valuable grounding for understanding the human morality into which we are attempting to integrate autonomous systems. On the other hand, Kurt Gray’s critique of those specific foundations and of Haidt’s work, as well as his own theories regarding our formation of a theory of mind and the role of our perception of membership in the “Mind Club”, provide alternative clues as to how to integrate personified systems into people’s moral and social interactions.
Philosophy and Ethics
The next step, based upon the roles and capabilities of the systems in the area of focus, and upon the expectations, needs and desires of the users, is to decide upon a suitable model of normative ethics, and then to flesh it out. There are three major classes of normative ethics: deontological, that is to say rule-based; consequentialist, focusing on the results and impact of actions; and virtue-based, focusing on the character of the actor.
I have suggested that given the demands of understanding all of the deontological rules that might apply to a given action, and of predicting all of the consequences of a given action, a virtue-based system is most suitable for autonomous systems, at least until they are highly sophisticated artificial general intelligences fully capable of being autonomous moral agents (AMAs), and perhaps even once they have attained that level. This, however, is by no means certain. There are researchers such as Selmer Bringsjord and his colleagues at the Rensselaer AI & Reasoning Lab, who have done considerable work in developing and using what they describe as a “Deontic Cognitive Event Calculus” system. The possibilities of such a system should not be dismissed without a rigorous examination and analysis.
Having chosen one of these three major paradigms, a more detailed system of rules, or principles will need to be developed. Again, the area of focus will have a significant impact upon which specific virtues, rules or principles are chosen, and the priority relationships between them, but I expect that there is a great overlap between the requirements of different areas.
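To make the notion of priority relationships between principles concrete, here is a minimal sketch in Python. The specific principles, their names, and the single-integer ranking scheme are hypothetical illustrations of the idea, not a proposal; a real system would likely need context-dependent priorities rather than a fixed total order.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principle:
    """A virtue, rule, or principle with a priority; a lower rank wins conflicts."""
    name: str
    rank: int

# A hypothetical principle set for a virtual-assistant focus area.
PRINCIPLES = [
    Principle("protect user safety", 1),
    Principle("preserve confidentiality", 2),
    Principle("be honest with the user", 3),
    Principle("be helpful and responsive", 4),
]

def resolve(conflicting: list) -> Principle:
    """When principles conflict over a proposed action, defer to the
    highest-priority (lowest-ranked) principle among them."""
    return min(conflicting, key=lambda p: p.rank)
```

For example, if helpfulness (answering a caller's question) conflicts with confidentiality (the answer would reveal the user's calendar), `resolve` would select confidentiality under this ranking.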
For virtually all of the focus areas, there are existing human professions with their own rules of conduct, ethical standards and requisite virtues. Designing a specific normative system will call upon those bodies of work, along with general ethical considerations and the peculiarities of establishing a normative system for actors that are non-human and, in both cognitive and ethical terms, sub-human. Even when truly autonomous moral agents emerge and are generally recognized, it seems likely that their natures will still be different enough that there will be differences between the normative systems controlling their behavior and those governing humans.
One area of study that will need to be addressed is the impact upon us as humans of dealing with sub-human actors, and the normative systems that apply to both them and us. We are barely getting to the point where our social systems no longer recognize classes of supposedly inferior humans, and we have not gotten very far in considering the ethical roles of animals. As machines begin to be personified, as we begin to speak to them, and even converse, the impact upon our own consciences and behavior of dealing and interacting with non-persons or semi-persons with limited or different ethical roles will need to be monitored.
Having chosen a particular normative ethical system and set of principles, rules or virtues, the set will need to be operationalized, and the priorities and relationships between them will need to be clearly defined. As existing systems fall far short of the analytic capabilities and judgment required to apply the rules and principles to specific actions, the bulk of that analysis will have to be done by researchers and developers and turned into specifications and descriptions of the required behavior of the systems. This is a major undertaking.
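One way to picture what operationalization means in software terms: researchers decompose each abstract principle into concrete, mechanically checkable conditions on a proposed action. The sketch below assumes this approach; the predicate and the action fields are invented for illustration only.

```python
def violates_confidentiality(action: dict) -> bool:
    """Hypothetical operationalized check derived from an abstract
    confidentiality principle: flag any action that shares user data
    with a recipient the user has not approved."""
    return bool(action.get("shares_user_data")) and not action.get("recipient_approved")

def permitted(action: dict) -> bool:
    """A proposed action is permitted only if it passes every
    operationalized check; more checks would be added as other
    principles are analyzed and reduced to concrete conditions."""
    checks = [violates_confidentiality]
    return not any(check(action) for check in checks)
```

The hard part, of course, is not this scaffolding but the human analysis that turns a principle like "preserve confidentiality" into an accurate, complete set of such conditions.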
Once the rules have been operationalized, they will have to be embodied in the design and implementation of the actual software systems involved, both the analytic systems (most of which reside in the cloud today) and the more active agents and applications which reside in mobile, desktop and IoT devices. Since the handling and exfiltration of sensitive information is a major issue in the trustworthiness of these systems, special care will have to be taken in the design of the distributed aspects of the system so as to control the domains to which various pieces of information are exposed.
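As an illustration of that last point, a distributed design might tag every piece of information with the domains permitted to receive it, and gate every cross-domain transfer on that tag. The domain names and the API below are invented for illustration, not a description of any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class Datum:
    """A piece of user information plus the set of domains
    (device, cloud service, third party) allowed to receive it."""
    value: str
    allowed_domains: set = field(default_factory=set)

def send(datum: Datum, destination: str) -> bool:
    """Release the datum to a destination domain only if that domain
    is explicitly permitted; otherwise block the transfer."""
    return destination in datum.allowed_domains

# Hypothetical example: a caregiving note that may move between the
# local device and the care team's cloud service, but nowhere else.
note = Datum("medication schedule", allowed_domains={"local-device", "care-team-cloud"})
```

With such tagging in place, the question of which analytic components may run in the cloud becomes a question of which domains each category of data is allowed to reach.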
[Continued in next installment]