Will the history books of the year 3000 paint Artificial Intelligence (AI) as the catalyst that propelled societies down a path toward moral and social good? Or will it be credited with leading humanity down a path of ethical destruction? Though the former is ideal for our children’s sake, what precautions are we implementing to ensure the latter does not happen?
The Aspen Institute’s Communications and Society Program recently hosted its third Roundtable on Artificial Intelligence to discuss the topic, “Developing Goals and Metrics for the Good Society.” The roundtable brought together passionate thought leaders from a variety of disciplines and perspectives, including academia, government, law, tech and the non-profit sector. I was fortunate to attend and participate in this roundtable as the 2019 Guest Scholar.
Our goal at the three-day event was to dive into the current trajectory of AI, the goals for AI that promote societal values, the signposts that provide insight into progress toward these goals, and the metrics needed to better monitor AI over time. There were sessions designated to steer the conversations toward specific topics, as well as several intermissions to allow for the continued sharing of ideas. As I reflect on the various dialogues, I am left with a broader perspective on AI in the world and many more questions than answers.
“We need to frame these issues relevant for today and not 30 years from now.”
We began our discussion with the current trajectory and long-term goals for AI, focusing on the values of liberty, efficiency, equality and community. AI has shown great potential to solve problems and answer longstanding questions in disciplines such as neuroscience, education, healthcare and transportation. However, the same technology that could solve these grand problems could also create new and potentially more dangerous problems of its own. For example, sophisticated healthcare AI could also be used for bioterrorism. AI systems trained on biased human data have the potential to further intensify disparities in the world. AI built to do more complex tasks may begin to take more jobs, reducing the need for humans to work and possibly redefining human existence entirely. As new technology is created, it is important to assess the effects of its use in a timely and comprehensive manner. Yet current regulation of AI is limited to a law-making model that cannot keep up with the technology’s exponential growth. These were brought up as a few of the immediate concerns impeding the trajectory of AI toward a better society.
“Power is in the hands of the experts. We should be democratizing expertise.”
Moving from current trajectories and goals, we then broke out into groups to discuss the signposts that we should look out for in the areas of employment, healthcare and governance. As my personal background is in computer science and developing smart rehabilitative systems for children with motor and cognitive disabilities, I joined the healthcare group. We had a very rich discussion about the current applications of AI in healthcare, such as personalized medicine, diagnostic systems and homecare robots. Some positive signposts that AI is being used for social good might be an overall healthier population, novel discoveries in the medical field, increased preventative care, affordable early diagnostic systems and explainable causal reasoning models. Some negative signposts might be surgical robots making mistakes, the increased possibility of designer offspring, statistical models producing incorrect diagnoses without human oversight, or an increase in the centralization of medical expertise. We pondered a world where AI could eradicate disease, nearly eliminate the need for hospitals and provide a better quality of life for persons with disabilities and ailments. We also considered the problems that could come from these solutions, such as overpopulation, climate change, food shortages and insurance manipulation, to list a few. Which problems in healthcare are most pressing? Is there an ethical line where AI has gone too far? Who decides where that line is? These are all questions that we considered during the breakout sessions.
“Are we taking the paths toward good or the avoidance of bad?”
We wrapped up our discussion with the metrics needed to monitor the use of AI and the governmental, industrial and civil structures necessary to oversee such metrics. Some current metrics include sentiment analyses and public opinion surveys, employment numbers for jobs in AI, academic publication statistics and media stories. However, a central set of metrics does not appear to be systematically mapped out. With AI encompassing so many complex tasks, should there be a central set of metrics? When considering regulation, how will we know if we have regulated enough? Too much? The metrics we consider for AI are greatly influenced by the narratives surrounding AI over time. Should we be placing more emphasis on developing metrics for the media and the narratives that are portrayed? These are important questions that must be answered when considering new metrics for AI in society.
“With a technological development race, slow governance can be a threat to national security.”
Even with metrics to monitor AI in society, who is responsible for it? Should we implement a certification model for specific technologies (like food service standards) so that users have the power to decide? Should there be an independent organization for oversight? Would a review board work? Can the current governmental structure adapt quickly enough to meet these challenges without obstructing technological advancement? The roundtable weighed the pros and cons of a variety of approaches that will certainly help push this necessary conversation toward action.
I greatly appreciate the Aspen Institute for providing the space and opportunity for multidisciplinary voices to address this great technological revolution happening in the world. It was an honor to meet such amazing leaders in their respective fields and to participate in such important dialogues. Since the workshop, I have been critically examining my work in AI and the impact that I wish to have on the field. I pose the question again: Will the history books of the year 3000 paint AI as the catalyst that propelled societies down a path toward moral and social good? I would argue that it is too early to provide a definitive answer, but not too late to set the course. Using AI to move toward the good society will require continued multidisciplinary governance efforts, greater access to necessary educational courses (AI, computer science, statistics, etc.) and improved cultural narratives surrounding the field of AI. Most of all, society must invest in the primary goal: to use AI for the ethical and moral betterment of human existence.
De’Aira Bryant is the 2019 Aspen Institute Roundtable on Artificial Intelligence Guest Scholar. The Communications and Society Program sponsors the guest scholarship initiative to give students of color the opportunity to foster their professional and academic careers in the field of media and technology policy.
Bryant is currently a doctoral student in the School of Interactive Computing at Georgia Institute of Technology.
The opinions expressed in this piece are those of the author and may not necessarily represent the view of the Aspen Institute.