
Human-Machine Relations: Reflections on the Intersection of Human Intimacy and Artificial Intelligence

April 13, 2020 • Mehtab Khan

Do you have a name for your Roomba? Do you have conversations with Alexa? These questions are now part of everyday discourse, and they make us wonder about the growing ubiquity of AI systems that mimic and respond to human emotions.

The Aspen Digital Program, formerly Communications and Society, hosted its Roundtable on Artificial Intelligence in January 2020 to discuss the intersection of human intimacy and artificial intelligence. The roundtable brought together lawyers, technologists, psychologists, doctors, philosophers, AI/ML engineers, and academics to discuss the complexities of humans interacting with AI. I was honored to be selected as the 2020 Guest Scholar to attend and participate in this year’s roundtable.

The discussion began with the human characteristics and traits that AI cannot replicate, forcing us to think about distinctly human responses and desires in relationships. We talked about AI-human interactions and how they have evolved over the past few decades, and we dispelled some notions about what an AI system has to look like for a human to respond to it with emotion. There is an assumption that people respond emotionally only to increasingly human-looking, or “humanoid,” AI systems. This is not true: humans have the capacity to anthropomorphize almost any object. Some people have named their Roombas (robotic vacuums) and, when one is sent off for repair, want that exact Roomba returned rather than a replacement.

The star of the Roundtable discussion was a robotic baby seal called PARO, which is used in medical settings to provide emotional support for a wide range of patients. In some ways, you can interact with PARO as you would with a pet. PARO is built with a range of responses, such as widening its eyes when it sees you and making cooing sounds. PARO serves a unique function in the healthcare industry. What we realized at the Roundtable, however, is that human beings are also capable of creating an experience out of an interaction with an AI system that is unique to their own needs and desires. Not everyone has to be prescribed an interaction with PARO to appreciate its utility.

There was also discussion of the opportunities afforded by an emotional relationship with an AI system. PARO serves a need in the healthcare industry. Voice assistants like Alexa and Siri help with a wide range of tasks, and participants talked about how children enjoy having conversations with them. However, the use of voice assistants also raises concerns about personal data privacy: who owns this data, and to what extent can it be used by third parties?

Increasingly, the ethical and psychological implications of this artificially empathetic technology are becoming important design questions. One point emphasized in our discussions was that design and development decisions should not be based only on anthropomorphism, the attribution of human traits, emotions, or intentions. To some extent, we will have to grapple with questions like what gender, if any, an AI system should have. Will it have a face? A backstory? The discussion of trust was central to the question of anthropomorphism. Does being human-like necessarily mean we should, or even can, trust a system? Does it merely give us a false sense of comfort? Will human-like features give such systems more decision-making power over humans?

The overarching questions remain open: How do inventions like PARO and Siri shape human emotions and experiences? What needs do they serve, and what do they replace? What would the future of such relationships look like? What opportunities and costs does an “emotional” human-machine relationship afford?

We also explored how the systems we build can replicate biases. We discussed scenarios in which biased policies might be designed into a system and then serve to reproduce and entrench inequalities. It is important to address these concerns in the design and use of AI systems so that they do not become automated versions of human overlords.

We agreed that the aim is not to replace human interactions but to supplement, enhance, and build on them. We need to make choices about what kinds of experiences we want to encompass when designing more interactive and embedded AI systems. Designing them will always be a challenge, but there should be a degree of distance from humanness, so that we can steer the conversation back to utility, explainability, and the ultimate goals of creating an intelligent system. The concluding consensus was that although there are many differences between human characteristics and AI capabilities, our aim is not to compare “intelligence” but to develop the potential of AI to supplement human society.

Reflecting on the Roundtable, I left with a more nuanced perspective on emotion and AI, and on the range of factors that should go into considering the future of AI-human interactions. My background is in law, and this was an important opportunity to explore multidisciplinary perspectives and think beyond narratives of regulation. I am grateful to the Aspen Institute for providing me an opportunity to engage with a diverse range of voices and disciplines. I was honored to meet thought leaders and senior professionals who are working on critical issues in their fields.

An important takeaway for me is that we need to keep having these conversations and to look for spaces to collaborate, such as the one created by the Aspen Digital Program. We also have to learn to negotiate different disciplines and terrains, such as psychology, engineering, the humanities, philosophy, and law. Only by keeping up with constantly changing definitions, contours, and issues will we be able to face emerging technological challenges.

Mehtab Khan is the 2020 Aspen Institute Roundtable on Artificial Intelligence Firestone Fellow (previously the Aspen Guest Scholarship). The Aspen Digital Program sponsors the Firestone Fellowship to give students of color the opportunity to foster their professional and academic careers in the field of media and technology policy.

Khan is currently a doctoral candidate at the University of California, Berkeley, School of Law.

The opinions expressed in this piece are those of the author and may not necessarily represent the view of the Aspen Institute.