Civic Action

To Fix a Broken Internet, Coders Revisit Their Code of Ethics

July 24, 2017 • Louise Lief

Add one more group to the growing list of those jolted into self-examination and action in the wake of the 2016 election. There are the women, the scientists, the media, the religious leaders — and now the coders.

In a world turned upside down, there are signs of a cultural shift in the tech industry toward taking greater responsibility for the great digital systems it builds and runs — and upon which much of the world now depends. Until now, tech companies have been in what Tarleton Gillespie, a researcher at Microsoft Research New England, calls a “regulatory sweet spot” between “legislative protections that benefit them and obligations that do not.”

Since the elections, the question of their broader responsibilities in public discourse and civic life is moving front and center. Concerned about fake news, disinformation and hate speech that are often amplified by automated online systems, the top global association of computer scientists, engineers and developers is re-examining its code of ethics and professional conduct, and issuing statements endorsing transparency and accountability for the algorithms that power these systems. Other computer scientists, aided by the American Civil Liberties Union, are suing the federal government for the ability to audit algorithms. Still others, alarmed by what they call the growing “creepiness” of digital tools, are critiquing current tech practices and searching for a better way.

I began this inquiry curious to see what the computing community thinks about the events of the past year. Do they feel, as so many people do, that the architecture of the social media platforms they’ve created has distorted and harmed public discourse? I also wanted to know, in an age where machines “learn” and tech leaders promise that artificial intelligence will solve many of the problems we now experience, where moral judgments enter the picture (if anywhere). In short, what are the ethical responsibilities of the people who build these systems?

Investigating these questions, I discovered a surge of activity in this community that began just after the November 2016 election. In December, the Association for Computing Machinery (ACM), the world’s largest society of computer scientists, software engineers, software architects and developers, called on its 100,000 global members to revise its code of ethics and professional conduct. It had not been updated in 25 years.

In January 2017, ACM’s US Public Policy Council issued seven principles for algorithmic transparency and accountability, intended to prevent flaws and biases in the design, implementation and use of algorithms that can cause harm.

In February, Facebook founder and CEO Mark Zuckerberg published a lengthy reflection called “Building a Global Community.” Many saw it as his response to widespread criticism of Facebook during the election campaign. Some hailed it as a shift from his previous thinking that Facebook is a neutral information pipeline to an acknowledgement that it helps shape civic discourse. He posed what he called “the most important question” of all: “Are we building the world we want?”

Increasingly, the answer to that question from his computing community is no. “I think the Internet is broken,” Twitter co-founder Evan Williams told the New York Times recently. “And it’s a lot more obvious to a lot of people that it’s broken.”

But if that is the case, how do you fix it?


Zuckerberg’s letter to his troops provides a revealing glimpse into the way many computer scientists and engineers think about the world. Digital anthropologist Tom Boellstorff of the University of California, Irvine, analyzed the clean, linear thinking that informs it. To Zuckerberg, he writes, “community is connection, pure and simple.” Political conflict is a technological bug that can be fixed. Scale is a preeminent virtue.

It’s a mindset common in this community. “I worry a little bit that the people deploying these systems are computer scientists trying to solve the world’s problems through computer science,” said MIT Media Lab director Joi Ito in introducing a new interdisciplinary Artificial Intelligence governance initiative. He has noted that computer scientists rarely engage with other disciplines like law and the social sciences about the impact of the systems they are designing. He’d like them to have a conversation “before it’s too late” and before AI becomes further embedded in our lives without forethought of the consequences.

As the ACM works to update its code of ethics by 2018, the organization is also struggling with these questions. The code has not been revised since 1992 — an astonishing span considering the building blocks of the World Wide Web weren’t even released to the public until 1993. With the exception of a more limited software engineering code written in 1999, ethics guidelines in the computing industry have not kept up as the world has dashed from one breakthrough to another, revolutionizing information systems and changing the globe. I asked ACM ethics committee chairman Don Gotterbarn, who also helped write the original 1992 ACM ethics code, why the update took so long. After a long pause, he said slowly, “I don’t have an answer to that question.”

Now in its second draft and open for public comment, the new ACM code is clearly written for a different kind of crowd. The drafters feel obliged to state that “the Code is not an algorithm for solving ethical problems.”

It’s a striking contrast to Zuckerberg’s post. It says “the public good” should be the paramount consideration in decision-making. It outlines moral principles and guidance for professional responsibilities, acknowledging the broad impact computing has on society. It calls on this community to make greater efforts to anticipate the negative impacts of the systems they design before they deploy them, and to accept personal responsibility and accountability for their work. It says if these systems cause harm — even unintentionally — the designers have a duty to act to undo or mitigate it.

There is a new section addressing sexual harassment. The privacy section urges computing professionals to establish procedures to allow individuals to review their personal data, correct inaccuracies and opt out of automatic data collection.

Gotterbarn calls it an “aspirational document,” with “pointers to difficulties to be avoided.” But if the 2018 code keeps these changes, they will become new standards for the behavior of computing professionals. Beyond the commitment of ACM members to observe the code, these standards could be cited in court, as the 1992 code has been. And it addresses complex systems that hadn’t been invented when the older code was written.

Among these are the algorithmic information systems that power social media. To many of us, algorithms are a black box, something we suffered through in high school math class. Now they affect almost every aspect of daily life. In addition to shaping Facebook’s News Feed, Google search results, Twitter’s timeline and trending hashtags, and much of online ad distribution, they increasingly guide decision-making in policing and criminal justice, employment, health, education, commerce, finance, defense and other critical fields. Algorithms are calculations that process and analyze information, producing results according to the recipes they’re given. They also learn and change by analyzing the data they collect and feeding it back into the system, generating new algorithms. Their results seem “logical” and shape our choices.
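To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python of an engagement-driven ranker. The posts, features, weights and learning rate are all invented for illustration; no real platform’s code or data is represented.

```python
from collections import defaultdict

# Each post is tagged with simple features (topic keywords).
posts = [
    {"id": 1, "features": {"politics": 0.9}},
    {"id": 2, "features": {"sports": 1.0}},
    {"id": 3, "features": {"politics": 1.0, "outrage": 1.0}},
]

# The "recipe": how much each feature counts toward a post's score.
weights = defaultdict(lambda: 1.0)

def score(post):
    return sum(weights[f] * v for f, v in post["features"].items())

def rank(feed):
    return sorted(feed, key=score, reverse=True)

def record_click(post, learning_rate=0.2):
    # Engagement feeds back into the recipe: features of clicked posts
    # gain weight, so similar posts rise in future rankings.
    for f, v in post["features"].items():
        weights[f] += learning_rate * v

print([p["id"] for p in rank(posts)])  # before any clicks: [3, 2, 1]
record_click(rank(posts)[0])           # the user clicks the top post
print([p["id"] for p in rank(posts)])  # after: [3, 1, 2]; "politics" now outranks "sports"
```

The point of the sketch is the loop, not the arithmetic: whatever gets clicked shapes the next ranking, which shapes what gets clicked next.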


Another ACM committee, its US Public Policy Council (there are also policy councils in Europe, India, and China), has identified potential distortions and biases in algorithmic approaches that can produce errors. In January 2017 the council waded into this thorny area and issued a statement on algorithmic transparency and accountability, emphasizing the need to detect and address the harm algorithms may cause. The council also stated that “institutions should be held responsible for decisions made by the algorithms they use.”

Algorithms can go awry in various ways, and the news media is struggling to cover this strange, complex new beat. Amazon Prime Free Same-Day Delivery algorithms unintentionally excluded minority urban neighborhoods. A news investigation found that algorithmic risk assessment tools used by the Broward County, Florida, criminal justice system to judge the likelihood that defendants would reoffend were biased.

Even software engineers suffer from algorithms’ thoughtlessness. Web designer and developer Eric Meyer described experiencing Facebook’s “algorithmic cruelty.” His young daughter Rebecca died in 2014 on her sixth birthday. It was a wretched year, and he decided not to create a year-in-review of his Facebook posts. Without asking, Facebook generated one for him. On Christmas Eve, it showed him a picture of his dead daughter surrounded by balloons and dancers under the title, “Eric, here’s what your year looked like!” Her photo kept popping up with different, fun backgrounds, he wrote, “as if celebrating a death.”

This particular Facebook algorithm was built for “happy users.” In a blog post Meyer listed design features that would have helped avoid this painful experience. In the comments section of his post, others recounted similar experiences. “Emily” voiced resentment at her unequal marriage with Facebook and fretted about leaving, a dilemma many users face. “What makes it difficult is that Facebook is often a source of community and support during trying times,” she wrote. “So it doesn’t make sense to say, ‘just quit if you don’t like it.’ Facebook is hugging you with one arm and punching you with the other.”

The developers who commented speculated that this Facebook product team had rushed to out-innovate competing groups without thinking through worst-case scenarios, and so built a skewed algorithm.
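One obvious safeguard against such worst cases is to make a feature like this opt-in rather than automatic. Below is a minimal, hypothetical sketch of what that gate could look like; the function names and the consent flag are invented for illustration and are not drawn from Meyer’s post or from Facebook.

```python
# Hypothetical sketch: gate an auto-generated "year in review" behind
# explicit consent instead of pushing it into the user's feed.

def build_year_in_review(user):
    # Placeholder: gather the user's most-engaged posts from the year.
    return {"title": f"{user['name']}, here's what your year looked like!"}

def maybe_offer_year_in_review(user):
    if not user.get("opted_into_retrospectives", False):
        # Offer, don't impose: a quiet prompt the user can ignore.
        return {"prompt": "Would you like to see a review of your year?"}
    return build_year_in_review(user)

print(maybe_offer_year_in_review({"name": "Sam"}))
print(maybe_offer_year_in_review({"name": "Sam", "opted_into_retrospectives": True}))
```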

This is another characteristic of Silicon Valley. Its culture has fostered an almost single-minded focus on speed, scale and growth. It’s difficult to shoehorn ethical considerations into this hyper-speed production cycle. “The driver is getting it out in the market and letting the market fix it,” says Lisa Welchman, a consultant on digital governance. “Most of the time, [ethics] is not what the person writing the ‘killer app’ is thinking about or what the business side is thinking about.” She contrasts this process with the automotive industry, where safety is a value considered in almost every step of the development process.

Tarleton Gillespie, principal researcher at Microsoft Research New England, studies how algorithmic information systems shape public knowledge and discourse. He’s completing a book on our decade-long experiment with social media and how to think about it going forward.

He notices the public beginning to sour on social media platforms even as they become more embedded in our lives. He believes the elections tipped the scales. “It’s one thing to say Facebook is running something, another to say there’s a flurry of information and discourse, and I can’t track the problem or see who’s pulling the levers.”

He thinks it would help to abandon the myth that algorithms are neutral. In reality, they privilege certain activities and make others difficult, weighing some things heavily and others lightly. “You are instituting a value system, in a way,” says Gillespie, “while insisting you aren’t.”

Since advertisers value personalization and customization, platforms are built to emphasize those attributes rather than a diversity of ideas, says Jessa Lingel, a professor at the University of Pennsylvania’s Annenberg School for Communication. “[The platforms] sell customization as a feature rather than as a limitation.”

In the Internet’s early days, searches showed more viewpoints, she says. The scientists who built the Internet were deeply committed to making the world more democratic, tolerant, and diverse. Today, extreme voices and viewpoints all too often overwhelm more diverse and tolerant ones, driven by automation, virality and scale. Customization has exacerbated the problem, and public discourse on these networks has become more siloed. “We were promised a democratic web,” says Lingel. “We got a populist one.”

Addressing many of these issues runs up against a central difficulty. Private companies control the platforms that shape the public sphere. The ACM’s call for algorithmic transparency and accountability is a tough sell for many of them; any plan to implement transparency would face legal barriers from companies’ Terms of Service, which consider their algorithms and the data they generate proprietary.

The American Civil Liberties Union wants to change that. Last June it filed a lawsuit on behalf of computer science researchers at the University of Michigan, the University of Illinois and Northeastern University, along with the media organization First Look Media, to challenge the constitutionality of the Computer Fraud and Abuse Act, a decades-old federal anti-hacking law they say is overly broad. The plaintiffs want to develop ways to independently test and audit algorithms to identify instances of bias, discrimination and illegal practices. The current law threatens civil and criminal penalties for this kind of work. Without such investigations, the researchers write, “We have no idea how [algorithms] accomplish what they do.”
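The kind of independent testing the plaintiffs have in mind can start very simply: probe a decision system with constructed test profiles and compare outcomes across groups. The sketch below is purely illustrative; decide() stands in for the opaque system under audit, and the profiles, groups and threshold are all invented.

```python
def decide(profile):
    # Stand-in for an opaque, third-party decision system being audited.
    return profile["score"] > 0.5

def audit(profiles, group_key="group"):
    # Compare approval rates across groups. A large gap is a flag for
    # closer inspection, not proof of bias on its own.
    totals, approved = {}, {}
    for p in profiles:
        g = p[group_key]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + int(decide(p))
    return {g: approved[g] / totals[g] for g in totals}

test_profiles = [
    {"group": "A", "score": 0.7}, {"group": "A", "score": 0.6},
    {"group": "B", "score": 0.7}, {"group": "B", "score": 0.3},
]
print(audit(test_profiles))  # {'A': 1.0, 'B': 0.5} flags the system for scrutiny
```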

In some cases, even the engineers who design these systems don’t understand how they work. Unlike the scientific world, where scientists are expected to describe how they arrived at their results and other scientists try to replicate them, in the tech world it can be enough to get the algorithm to do what you want it to do. “When I talk to people who design these systems, they don’t always know why the system delivered information the way it did,” says Gillespie. “They don’t seem troubled by it.”

The new draft ACM code seeks to establish a “shared moral bedrock” for all the computing professions, says ethics committee member Bo Brinkman, a computer science professor at Miami University in Ohio. He studies the social and ethical implications of augmented reality, another area not covered by the existing code.


Shifting the discussion from what the big tech platforms will or won’t do to identifying measures that will advance the “public good” opens up new vistas and suggests new tools. The draft code offers a framework for thinking through ways to make social media and other tech platforms safer and more hospitable. It focuses on the computing community’s impact on society, the need for its members to accept responsibility for that impact, and their duty to act to prevent harm.

But in the computer science world, there is still resistance to the idea of responsibility for the public impact of the systems they build. Responding to a section in the draft code on the need to help the public better understand the impacts of computing, several people objected in the comments. “Quite frankly I don’t think computing professionals need to serve as the help desk of the public,” wrote one. “We get enough of that from our families.”

Beyond the ACM’s efforts, unease in other quarters of the computer science world is increasing. In May, Mark Hurst, an MIT computer science graduate and founder of the consulting firm Creative Good, launched the platform Skeptech to address what he sees as the tech industry’s growing systemic problems and “creepiness.” In the late 1990s, he wrote, he was “naively optimistic about the Internet’s potential to improve the world.” Now he talks about addictive tools and surveillance systems that track users’ private communications and actions without their knowledge or consent. He says the values he cherishes seem “increasingly out of step” with the new tech reality. He still believes digital technologies can benefit the public, but he thinks the industry has gone astray.

Many professionals in this world are looking for solutions. Gillespie and others have explored the idea of “lenses” that filter social media platforms, making it easier to share things like collective block lists to exclude harassers. But social media business models depend on maximizing shares and clicks. Tech companies would need to make some economic tradeoffs. Lingel wants to see the platforms build more diversity into their architecture, support apps that promote constructive civic dialogue, and make their community guidelines more wiki-like and democratic, a place where developers and users can discuss and agree on shared standards.
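As a rough illustration of the “lens” idea, here is a toy sketch of a community-maintained block list applied as a filter over a timeline. The account names, post structure and filtering rule are all invented for illustration.

```python
# Hypothetical: a block list maintained and shared by a community of users,
# applied as a "lens" over an incoming timeline.
shared_block_list = {"harasser_account", "spam_bot_42"}

timeline = [
    {"author": "friend_1", "text": "Lunch photos"},
    {"author": "harasser_account", "text": "abusive reply"},
    {"author": "friend_2", "text": "Local news link"},
]

def apply_lens(posts, blocked):
    # The lens drops posts from any author on the subscribed list,
    # without waiting for the platform to change its own ranking.
    return [p for p in posts if p["author"] not in blocked]

for post in apply_lens(timeline, shared_block_list):
    print(post["author"], "-", post["text"])
```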

Software engineer and consultant Indi Young has developed a slower deliberative cycle that operates alongside the rapid development cycle in order to better accommodate ethics discussions.

Universities are beginning to up their game on ethics training in computer science, long considered an afterthought. In May Princeton University, whose graduates include Amazon founder Jeff Bezos and Eric Schmidt of Google’s parent company Alphabet, held a conference to consider ethics in computer science research. Participants asked questions like, “Could my new face detector be misused for racial profiling?” and “Is my web crawler accidentally scooping up sensitive information about people?”

It’s great they’re asking these questions. Here’s one more: Judging by the new ethics standards proposed by their own community, how do tech leaders measure up?

Louise Lief has been Scholar-in-Residence at the American University School of Communication Investigative Reporting Workshop, and was a public policy scholar at the Wilson Center.