Media

Unbreaking News

October 1, 2018 • Joanne Lipman

Key Points

  • Fake news, bias, and manipulative technology have led Americans to doubt or outright distrust the media. It’s time to fix that.

The American journalistic ideal for more than a century has been unbiased, objective journalism that presents the facts accurately and fairly. Today’s crisis of trust in media is largely due to the perception—and often the reality—that news organizations representing themselves as fair are instead biased. It isn’t hard to understand why. Consumers are bombarded with different types of information, from multiple sources of varying reliability, all of which looks the same when served up in an internet search or on social media.

The media, meanwhile, do a poor job of clearly distinguishing fact from opinion. Multiple news formats make ample use of opinion—including analysis, editorials, advocacy, and op-eds. Then there are straight-news reports that are perceived as biased either because of judgment calls or omissions. On one recent evening, for example, the lead story on CNN was about the Russia investigation, while the Fox News lead was about the strength of the economy. Both were factually accurate. But viewers of one would think they were living in a different world than viewers of the other.

Bias itself isn’t necessarily dangerous. In other Western countries, including the United Kingdom, news organizations are openly biased and consider it a virtue to have a point of view. Bias is also at the heart of advocacy journalism, which has toppled dictators and authoritarian regimes. A century ago in the United States, advocacy journalism revealed abuses in child labor and the meatpacking industry, which led to health and safety laws. The problem comes when bias isn’t acknowledged—especially when there is no transparency about the source of information, who is sharing it, or what their motivation is. Then, information can be weaponized.

Unfortunately, bad actors have become expert at weaponizing information—and are adept at abusing technological platforms to do it. It’s clear Russia staged a cyberattack to disrupt the 2016 elections. Recently, Director of National Intelligence Dan Coats sounded the alarm that “the warning lights are blinking red.” There are also homegrown political agitators, as well as profiteers looking to make money on advertising for viral posts. Platforms like Facebook, Google, and Twitter are particularly vulnerable to cyberattacks and manipulation. Russia and other bad actors have focused their efforts on platforms that use algorithms to surface and distribute information, along with social signals that can be manipulated by bots and “like farms,” or “click farms,” which artificially inflate post interactions. These platforms distribute damaging and false information more quickly and to a far wider audience than any individual news outlet.

Those with bad intentions make sophisticated use of “persuasive technology,” a set of tools used to create addictive behavior: alerts, likes, retweets, colorful graphics, Snapchat streaks. Persuasive technology rewards information that spreads quickly and garners attention. Virtually all digital players, including mainstream media, use some form of it to change behavior and entice repeat usage.

But in a dangerous twist, MIT researchers recently found that lies and fake news spread more quickly on Twitter than true news. Another study suggested that angry posts spread more quickly than joyful ones. Bad actors can take advantage of that behavior to plant fake posts that sow discontent, hatred, and divisiveness.

How to address this volatile state? The answers are many, but here are four that build on the ideas above:

A new incentive system. Currently, the business models of Facebook, Twitter, Google, and other platforms are built on viral content—whatever is shared the most ends up earning the most money for both the tech firms and the content creators. That in turn encourages manipulative and false content, because it spreads the fastest. The nation’s brightest minds need to harness persuasive technology to reinvent that incentive system. How do we reward good content? What techniques will encourage users to share it?
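The incentive problem can be made concrete with a toy sketch. Everything below is hypothetical—the posts, share counts, and credibility scores are invented for illustration—but it shows the core dynamic: a feed ranked purely by engagement rewards the most-shared post regardless of reliability, while weighting engagement by a credibility score (one possible form of a new incentive system) produces a different winner.

```python
# Hypothetical sketch of the incentive problem.
# All posts, share counts, and credibility scores are invented for illustration.

posts = [
    {"title": "Outrage-bait rumor", "shares": 9000, "credibility": 0.1},
    {"title": "Verified investigative report", "shares": 1200, "credibility": 0.9},
    {"title": "Celebrity gossip", "shares": 2000, "credibility": 0.5},
]

def engagement_rank(post):
    # Current incentive: raw virality is all that matters.
    return post["shares"]

def weighted_rank(post):
    # One possible alternative: discount shares by a credibility score,
    # so widely shared but unreliable content earns less reach.
    return post["shares"] * post["credibility"]

by_engagement = sorted(posts, key=engagement_rank, reverse=True)
by_weighted = sorted(posts, key=weighted_rank, reverse=True)

print("Engagement-only:", [p["title"] for p in by_engagement])
print("Credibility-weighted:", [p["title"] for p in by_weighted])
```

Under engagement-only ranking the rumor tops the feed; under the credibility-weighted score the investigative report does. The hard open questions the text raises—who assigns credibility, and how to keep that signal itself from being gamed—are exactly what such a redesign would have to answer.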

Clear labeling of fact versus opinion. Can any news consumer name the differences among “analysis,” “perspective,” and “opinion”? If they can’t—and it’s a fair bet even most professional journalists can’t—the labels are useless. Media need fewer labels, paired with more prominent demarcations of what is fact and what is opinion. Journalists have a role to play here, as do their editors. Reporters who speak publicly must strive to stick to the facts of what they have reported firsthand, rather than opining on events.

Acknowledgment of bias. Media and tech firms need to agree on a definition of bias. This alone is complicated, made more so by the stigma of admitting to bias. But a lack of transparency breeds mistrust. So let’s call it what it is. In some cases, the answer is straightforward: Fox News leans right, while MSNBC leans left. More often, though, the answer is complex, as at outlets like CNN and The New York Times, which consider themselves unbiased but can slip into the liberal camp.

Clarity of sourcing. Who is sharing information, for what purpose, who is funding it, and how credible is the organization? Digital leaders must commit to transparency on this front. A number of initiatives—including the Trust Project and the Knight Commission on Trust, Media, and Democracy—are working on solutions. The purpose is not to suppress points of view. It is to ferret out bad actors, such as foreign countries interfering in the political system or homegrown scammers making a buck from viral posts.

We can restore public trust in the integrity of journalism. The journey starts with adherence to journalistic first principles, an overhaul of the information incentive system using aggressive technological solutions, and transparency at every level.