
Dartmouth Professor Reviews Facebook’s ‘Fake News’ Tools

Brendan Nyhan. (Dartmouth College - Eli Burakian)



Valley News Staff Writer
Wednesday, November 22, 2017

Hanover — With more than half of American adults relying on social media for some of their news in 2017, according to the Pew Research Center, it’s fair, but also a bit troubling, to say that the nation’s democratic health derives, in part, from that Facebook post you just liked.

The public got a taste of Facebook’s importance during the 2016 election, when intentionally false stories about political candidates reached millions of people and, in one notable case, sent an armed man to a Washington pizzeria in search of a nonexistent child sex ring.

To help Facebook address this problem, Brendan Nyhan, a professor of government at Dartmouth College, has released a study evaluating the effectiveness of new tools the company is using to combat fake news.

“2016 showed that the checks against fake news and misinformation were too weak and we need to make them stronger,” Nyhan, who studies misperceptions and misinformation in politics, said in an interview last week.

Reporting after the election revealed that those stories, most famous among them a fake report that Pope Francis had endorsed Donald Trump, came from a variety of sources, including homegrown fraudsters, state-sponsored content farms in Russia, and moneymaking operations run by teenagers in Macedonia.

“There’s no solution that will get rid of this problem,” he said. “It’s something we’ll have to manage as a society. ... The price of free speech is that we will have false information.”

In the meantime, Facebook has begun adding “disputed” tags to fake stories that appear in users’ feeds — the scrolling streams of news stories and updates from friends.

Nyhan and several student collaborators this spring surveyed just under 3,000 participants on their reactions to anti-fraud efforts like Facebook’s, with somewhat encouraging results.

The researchers found that “disputed” warnings decreased participants’ confidence in labeled stories, but also that tags with more direct language such as “rated false” were even more effective. Slightly more than a quarter of respondents rated an unlabeled false headline as “somewhat” or “very accurate” in the study, but only 19 percent expressed that level of confidence when the headline was tagged “disputed.” Even fewer people — 16 percent — thought headlines “rated false” were accurate.

In addition to its accuracy tags, Facebook has been trying to educate users about fake news by placing articles in their feeds that warn them to “remain skeptical” about the articles they see online. Less optimistically, Nyhan’s team also found that general warnings about fake news tend to decrease confidence in all news, including accurate information.

Unlike some other fake news research, the Dartmouth study did not find what’s called an “implied truth” effect, where users express increased confidence in unlabeled stories when they know about the presence of accuracy labels.

Another recent study, by Gordon Pennycook and David Rand of Yale, had more respondents rate more articles than Nyhan’s did, which allowed them to find a “very small” but still statistically significant implied truth effect, Nyhan said last week.

Nyhan is not the only faculty member at Dartmouth who has drawn attention for efforts to sniff out fake material on the internet.

Hany Farid, a computer scientist and expert detector of manipulated images, helped to refine a program that detects, verifies and flags child pornography online. He also recently developed a digital tool that finds and reports violent extremist posts on social media outlets such as Twitter, an unwilling haven for Islamic State members, among other militant groups.

Nyhan said he and Farid had talked about combining their efforts, perhaps to research fake digital images circulating online in politics, though that idea hasn’t yet come to fruition.

The next logical step — writing a computer program that can read news stories and determine whether they’re fake or not — may be a long way off, if not completely unattainable, according to Nyhan, who called the idea “virtually inconceivable.”

“I don’t think we can program a computer to tell us what (the truth) is,” he said.

Farid expanded on that thought in an email on Wednesday, writing that “the accurate and automatic detection of fake news faces significant challenges.”

Those challenges include the volume and speed at which information is posted online, the difficulty of determining what is objectively false versus partly false or merely misleading, and the possibility that automatic algorithms could be gamed by ill-meaning adversaries, Farid said.

“I imagine that any effective solution will require a combination of human- and computer-based interventions in which possibly imperfect computer algorithms flag stories for human review before they are removed or flagged as untrustworthy,” Farid said. “This will in turn require a significant re-thinking of editorial policies at platforms like Facebook, Google, and Twitter.”

Nyhan also warned against social media giants going beyond flagging outright false and fraudulent information — by giving readers tips, for instance, about which conventional news sources are more reliable or provide better information.

“That’s a really tricky issue,” he said. “I think we should be wary about delegating that kind of informational authority to a private company.”

Facebook, he added, is being asked “to do the impossible — to sort through thousands or millions of pieces of information and warn people what’s true or false in a way that no one knows how to do.”

Given the difficulty there, it pays to be “very circumspect” about the scope of Facebook’s fact-checking mission, he said.

Nyhan expressed mixed feelings about the effect of public pressure on Facebook’s efforts to police its information streams. The company was showing some signs of improvement, he said, but still lacked transparency about internal workings that have a direct effect on the public.

Facebook has reported, for instance, that its anti-fraud efforts are reducing the number of times fake stories are loaded onto screens — a figure called “page impressions” that can roughly estimate how often people read fake stories.

But the company isn’t providing enough information to judge whether users are less interested in fake news or whether Facebook itself is tweaking the system that shows fake news to users.

“We’re going to have to face some hard questions about the power that tech giants wield,” Nyhan said. “We’re delegating a tremendous amount of power ... to black-box algorithms within those companies that even they don’t fully understand.

“We should be uncomfortable with that idea.” 

Rob Wolfe can be reached at rwolfe@vnews.com or 603-727-3242.