Can Censoring Social Media Stop Extremism?

June 25, 2017 • Rebecca MacKinnon

Rebecca MacKinnon directs the Ranking Digital Rights project at New America. She co-founded the citizen media network Global Voices and authored Consent of the Networked: The Worldwide Struggle for Internet Freedom. She will speak at the Reimagining the Internet track at the Aspen Ideas Festival. Below is her piece on the regulation of speech on social media platforms.

Would recent terror attacks in the UK, France and elsewhere have happened if social media didn’t exist? Would they have been nipped in the bud if Facebook and YouTube had done a better job of policing their users’ activities and reporting to authorities?

Earlier this month, UK Prime Minister Theresa May slammed the American social media giants for allegedly creating the “safe space” that extremism needs “to breed.” She and France’s new president, Emmanuel Macron, have joined forces in threatening to crack down on social media companies – imposing fines and greater legal liability – if they “fail to remove unacceptable content.”

The internet is by no means the root cause of terrorism, though it has certainly proven to be an amplifier and conveyor belt for hatred and violence. We should expect companies to work with law enforcement, user communities, and human rights groups to prevent their platforms from empowering and spreading violence and hatred. But there is no quick technical fix that won’t also undermine those who are trying to fight extremism and hate – along with many other people who accidentally get silenced by companies’ efforts to police dangerous speech. Fighting terror – online as well as offline – requires thoughtful and responsible leadership by government and companies alike if democracy is to survive and if both human rights and public safety are to be protected.

In response to regulatory threats of fines and other legal penalties not only from the UK and France but also from Germany and elsewhere across the European Union, Facebook and Google recently published blog posts and articles explaining how they are beefing up efforts to keep terrorist activity off their platforms. Facebook is deploying new artificial intelligence capabilities to distinguish genuine terrorist speech from speech that merely mentions terrorism, such as news commentary. It is also hiring more people – including counter-intelligence experts – and consulting with community groups to help identify and shut down real terrorist activity. Google is similarly adding engineering resources, and the Alphabet division will fund non-profit groups that can help it identify terrorist content and counter it with alternative, anti-terrorist messages, in hopes that people targeted by ISIS recruiting will be persuaded to change their minds.

But as these companies move quickly in response to government pressure to deal with online extremism, activists are suffering “collateral damage.” For example, in a post on Medium last month titled “Mark Zuckerberg Hates Black People,” Black Lives Matter activist Didi Delgado wrote about how, across her network, Facebook was banning the accounts of activists who speak out against racism by posting screenshots of the racist and hateful messages they receive, in an effort to “out” the racists.

The problem is global. In the Southeast Asian country of Myanmar, which is struggling to emerge from a legacy of military dictatorship, human rights groups have raised the alarm about the use of Facebook by Burmese religious fundamentalists to foment ethnic hatred and violent attacks against the country’s Muslim minority. Facebook’s response was to ban a derogatory Burmese-language word used by ultra-nationalists and religious fundamentalists to describe Muslims. But as Global Voices Advocacy recently reported, the ban has also resulted in censorship of people trying to speak out against racist behavior.

YouTube’s systems that police video content and block ads from running alongside offensive content are also ham-fisted. A sports fan who uploaded a clip of a giant American flag being unfurled across a baseball field on the 10th anniversary of 9/11 was presented with a message that the video was not “advertiser friendly.” According to a Google support document, such a label can be triggered if a video includes “sexually suggestive content,” “violence,” “inappropriate language,” “promotion of drugs and regulated substances,” or “controversial or sensitive subjects and events.” Advertising that is normally paired with videos on YouTube is therefore not shown with the video, preventing the account holder from earning any advertising revenue on it. It was only after journalist Rob Pegoraro inquired with the company that it admitted it had made a mistake and rectified it – albeit without any clear explanation.

These are just three examples of how difficult it is for companies like Facebook and Google to stamp out violent and extremist speech without unintentionally violating many other people’s rights. Whether that matters to politicians now threatening regulation is unclear. But in the long run companies must find a way to reduce the collateral damage to the free speech rights of activists, journalists, hobbyists, and random people having edgy conversations about controversial and upsetting social issues, while also reducing the use of their platforms to promote and perpetrate violent extremism. If they are viewed by a growing number of users as arbitrary and unfair in their policing mechanisms, people will increasingly take their online activities elsewhere. Meanwhile, there is scant evidence that social media crackdowns will actually prevent terror attacks from happening.

Leaders of democratic nations such as May in the UK have also cited recent terror attacks as a reason why companies should not be allowed to offer encrypted communications tools that make it impossible for anybody to eavesdrop on a user’s conversations. Yet the perpetrators of the recent Manchester and London attacks were previously known to authorities, who failed to connect the dots between pieces of data already at their disposal. The existence of secure communication channels did not make the attackers invisible. May’s government is pushing to require companies to build deliberate weaknesses into their encryption technology so that authorities can access conversations on request. Such ideas have support among some members of the U.S. Congress as well. But without strong encryption, journalists, activists, and vulnerable religious minorities around the world will be even more exposed to attack than they already are, and we will be no safer from terrorists. Moreover, weakened security for financial transactions, medical records, and other public services will make them more vulnerable to cybercrime and to attacks such as the one that paralyzed much of the UK’s National Health Service last month. As citizens we must insist that our government leaders commit to finding ways to keep us safe from terrorists without stripping us of our rights or making people physically and economically more vulnerable in other ways.

That is not to say that companies shouldn’t be held accountable for their business models, design and engineering choices, and policies affecting people’s online activities, safety, and security. Corporate actions and decisions have clear implications for the future of human society: economic, social, and political. It is now widely accepted across the world that companies should be held accountable for their impact on global climate change, on the human rights of the workers they employ, and on people in the communities where they operate. As investors, consumers, and voters we should also expect that, when it comes to the communications technologies everybody depends on, companies must contribute positively to the future of democracy and human rights rather than corrode the freedoms and rights we want our children and grandchildren to enjoy.

This responsibility should include preventing governments from using their platforms as a theatre for information warfare against their political and geopolitical enemies. After an initial stage of denial in the aftermath of the US presidential election, Facebook and Google have started to work with a range of journalists, academic researchers, and other expert groups to expose and debunk “fake news” on their platforms. But serious concerns remain. A recent 12-nation study by the Oxford Internet Institute found that “Social media are actively used as a tool for public opinion manipulation” and concluded that these companies “need to significantly redesign themselves if democracy is going to survive social media.”

Social media are being used by a range of people and organizations to further various political and financial objectives. The question is: in the internet age, when power is abused and people’s rights are violated, how do we make sure that the abusers and violators are held accountable? Or that we even know who committed the abuse of power and how? How do we design and manage technologies in a manner that enhances democracy and human rights – rather than corrodes them? Nobody has good answers. Companies would take a step in the right direction if they were much more transparent about how they police and manage content on their platforms, as well as how they use and share information that can be used to track and profile people. Research on company disclosures conducted by Ranking Digital Rights, a project that I lead at New America, has found corporate disclosure about such practices to be inadequate.

Improving the governance and accountability of technology platforms is necessary but not sufficient, however. It will not eliminate the real source of violence, hate and abuse of power. The fact is that governance, politics, and economics across the world are broken: they are not serving most of the world’s people well at all. Nor are our existing political and legal institutions – even in democracies – capable of addressing the security and human rights challenges that have emerged across globally interconnected digital networks.

Right now the world seems to be in a negative feedback loop: dictators, demagogues, and violent extremists are using digital tools and platforms to achieve their goals. But don’t forget the many ways that journalists, activists and members of many persecuted groups also depend on social media platforms every day to organize and advocate for social justice. Citizens fighting for their rights in disadvantaged or persecuted communities across the world must not be deprived of their most effective tool for organizing and exposing lies and injustice.

Our political, legal, and economic institutions are all in desperate need of upgrading so that they can better tackle the problems posed not only by social media but by artificial intelligence and the “internet of things.” Such an upgrade cannot be achieved without innovative use of technology that can help us make governments – as well as companies that hold tremendous power over the public sphere – more transparent and accountable.

That in turn will only be possible if the technologies themselves are designed and managed in a way that respects human rights while also enabling people to hold accountable those who use technology to abuse power and commit crime. To get there, we are going to need principled, innovative leadership not only from government officials and tech company executives, but from all corners of society.

The views and opinions of the author are her own and do not necessarily reflect those of the Aspen Institute.
