Technology

How to Use Ethics to Build Trust in Artificial Intelligence

October 21, 2020

Currently, one of the most popular offerings on Netflix is The Social Dilemma, a documentary about how social media companies exploit our behavioral weaknesses to keep us online. This is quite ironic, as Netflix itself is precisely engineered to get us to click on and watch the ‘next episode’.

There are many questions we should ask. Is Netflix’s algorithm optimizing for my mental health or my screen time? Do ‘recommended shows’ match my own judgment of what makes me laugh and cry? Will documentaries push me into an ideological rabbit hole and make me prone to radicalization? Are these predictions based on a representative sample of people like me? And if Netflix reverse-engineers that I am a white man who likes European romcoms, is such inference reasonable?

These questions point in a similar direction: how can I trust tech companies like Netflix to enforce ethical standards in artificial intelligence?

I was fortunate to be part of a series of such discussions with young leaders in Asia, Europe, the US, and Latin America, organized by the Aspen Institute and Microsoft. Our thoughts and conclusions were synthesized in a report, ‘How AI can work for humanity.’ The report advances the idea of ‘labeling’ digital products to show their conformity to a set standard of AI ethics. We could imagine URLs looking like httpsai://www.netflix.com, with ‘ai’ standing for AI-ethics-compliance. Labels are appealing, yet sometimes they do not work. Sometimes, people just need more evidence. Sometimes, people avoid labels consciously or unconsciously. And sometimes, after seeing a label, people suspect the website is trying to trick them (known as the backfire effect).

Labels and standards may not be enough to convince the public that AI ethics are well implemented.

First, public opinion will likely remain foggy on AI. When DeepMind’s AlphaGo defeated Lee Sedol in 2016, it sparked a global discussion on AI ethics. In four years, that discussion has translated into a clear consensus, now moving towards implementation and, if needed, legislation. Nonetheless, the general public remains somewhere between ignorance and suspicion, with mixed support for AI at best. The AI ethics field moves much faster than a public that is still discovering this technology.

Second, irrational fears about AI and robots are harder to combat with rational responses such as standards. Indeed, transitioning to a new technological world naturally creates high expectations. Yet it also instills ambiguity and confusion, which, as history demonstrates, engenders anxiety and fears of coercion, manipulation, and control. Such fears will take time, maturation, and education to address.

Third, something is different when it comes to human trust in AI. Trusting humans to enforce human-made decisions, like trusting an organic label on a yogurt, is not the same as trusting machines to follow human intentions. German philosopher Martin Heidegger theorized that the human-technology relationship has a nature of its own, oscillating between suspicion and reliance until we accept the risk inherent to technology. What does it take to accept that risk for AI? It takes extra care. There is a little bit of our data, and thus of ourselves, in the AI machine, and we can perceive the “mimicking” exercise of algorithms as an invasion of privacy.

This discussion would be benign if distrust in AI and tech did not have critical real-world implications. During the coronavirus crisis, contact-tracing apps offered an early preview of what happens when ethical standards aren’t trusted. In May, Google and Apple jointly launched a technology that governments could use to build contact-tracing apps, asserting, “We believe that these strong privacy protections are also the best way to encourage use of these apps.” Thanks to its Bluetooth-based design, the system respected privacy more than many assumed. Still, privacy concerns were often cited as a reason people did not download contact-tracing apps. People simply do not trust that tech ethics are implemented.

Such distrust of implementation works somewhat like the fear of flying.

Imagine a passenger who has never heard of airplanes and now has to fly for the first time. At boarding time, she is legitimately afraid. To reassure her, flight attendants explain how planes work; that the plane has been checked by an independent expert authority; that the pilot is a good person; and that planes very rarely crash. The passenger still wonders whether to trust the attendants. So an attendant takes her to the pilot’s cabin, and the pilot even lets her indicate her preferred flight path. Persuaded, she boards the plane.

This analogy suggests six ways to increase trust in a new technology like AI once ethical standards are ready and implemented.

Increase transparency – explain how the plane works. Transparency about how algorithms work, what data goes in, and what objectives they optimize for is key. For example, Microsoft developed a tool to visualize how a machine learning algorithm arrived at a certain solution.
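To make “explain how the plane works” concrete, here is a minimal sketch of what this kind of transparency could look like in practice: training a toy model and publishing which inputs actually drive its predictions. This is a generic illustration built on scikit-learn’s permutation importance, not the Microsoft tool mentioned above, and the feature names and data are invented for the example.

```python
# Illustrative sketch of algorithmic transparency: train a toy
# "will the user click the recommended show?" model and report which
# input features drive its predictions. Feature names and data are
# hypothetical; this is not any specific streaming service's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical signals a streaming service might log about a user.
features = ["minutes_watched_last_week", "genre_match_score",
            "time_of_day", "autoplay_enabled"]
X = np.column_stack([
    rng.normal(300, 120, n),   # minutes watched last week
    rng.uniform(0, 1, n),      # how well the show matches past genres
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # autoplay on/off
])
# Synthetic target: clicks driven mostly by genre match and autoplay.
y = ((0.6 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.2, n)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>28}: {score:.3f}")
```

Publishing this kind of summary – in plain terms, “genre match and autoplay drive most of our recommendations” – is one simple way a platform could show users what data comes in and what objective the algorithm pursues.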

Boost the regulator’s reputation – praise the authority that checked the plane. The big question here is: who should the regulator be? For AI, 46% of surveyed NextGen members believe government should be in charge, but less than 30% are fully confident in government expertise, in line with the general public. If governments are perceived as legitimate but not up to the task, what can they do? First, they can prioritize trust in AI policy roadmaps. Second, they can proactively build a reputation for expertise (e.g. by deploying AI in public administrations) and for efficiency by intervening when needed (e.g. by asking for audits of algorithms). Third, they can innovate as regulators, as California did when it required automated online accounts to identify themselves as bots.

Persuade people that tech actors will not cheat – praise the pilot’s moral compass. Distrust accumulated over scandal after scandal will not be easily reversed. Tech companies will have to stop dropping ‘AI’ as a buzzword, answer critics with facts, and stick to proceduralism. Dismantling an ethics advisory board shows what not to do.

Enable the public to visualize the absence of harm – show that plane crashes are extremely rare. On most common fears, the indicators are actually positive. When people worry about job losses due to automation, show them that jobs are stable and that employees are less tired.

Invite people to shape regulation – take the passenger to the pilot’s cabin. Technology policy is generally not very participatory. But with AI, we need people to feel agency in norm-building, because ethics is as much about reaching a just result as about following a fair process. MIT’s Moral Machine global survey showed how to invite all voices. AI ethics touches on intimate notions of human rights, behavior, and autonomy, and a power transfer still needs to happen between tech companies and citizens. How can we move forward? It can start with a new narrative on AI ethics and policy that presents AI ethics not as a regulation to write, but as a collective problem to debate. It can then continue by multiplying participatory processes, such as the one the Aspen Institute began with its report and the global public consultation UNESCO launched this summer.

Make people more active in how they use AI – let the passenger choose the path the plane will take. Instead of using AI to power “magic” recommendation systems, as Netflix does today, we could imagine a world where AI only assists human judgment and does not replace it: a world of “information fiduciaries.” After all, in its ancient Greek reading, technology was there to supply what nature was missing. Maybe this is where we should restart. Where is nature missing?

And now, let’s watch that next episode.


View the launch of the NextGen report below. You can access the NextGen report here.

Henri Brebant is a member of the Aspen Institute NextGen Network and recently graduated from the Harvard Kennedy School of Government. 
