Posted by: Christine Schmidt   |   Feb. 4, 2019, 1:28 p.m.

Happy Monday, Facebook: Snopes has quit your fact-checking partnership.

Poynter’s Daniel Funke reports that Snopes has pulled out of the third-party debunking squad Facebook enlisted in 2016. The Associated Press is not currently fact-checking for the platform either, though it apparently hasn’t fully quit.

    Snopes, the 25-year-old fact-checking site, said Facebook’s system was too manual — not automated enough — for the 16-person organization. “With a manual system and a closed system — it’s impossible to keep on top of that stuff,” Snopes’ VP of operations told Poynter. “It doesn’t seem like we’re striving to make third-party fact checking more practical for publishers — it seems like we’re striving to make it easier for Facebook. At some point, we need to put our foot down and say, ‘No. You need to build an API.'”

(Snopes has its own challenges to manage, which it counts as another reason to focus more fully on its own fact-check work rather than the Facebook partnership.)

    My colleague Laura Hazard Owen explained how the fact-checking dashboard operated in 2016:

    Because the group of third-party fact-checkers is small at launch, and as part of its effort to focus on the highest-impact “worst of the worst,” Facebook is doing some sorting before the reported stories go to the fact-checkers. Its algorithm will look at whether a large number of people are reporting a particular article, whether or not the article is going viral, and whether the article has a high rate of shares. Facebook has also already had a system in place, for about a year, that uses signals around content (such as how people are responding to it in comments) to determine whether that content is a hoax.
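The sorting Laura describes — prioritizing stories by report volume, virality, and share rate before they reach fact-checkers — could be sketched as a simple heuristic. Everything below (the class, signal names, and thresholds) is an illustrative assumption, not Facebook’s actual system:

```python
# Hypothetical sketch of the triage Facebook describes: route a story to
# fact-checkers when enough signals (user reports, virality, share rate)
# fire at once. All names and thresholds here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Story:
    url: str
    report_count: int      # users who flagged the story as false
    views_last_hour: int   # crude virality signal
    share_rate: float      # shares per view


def needs_review(story: Story,
                 min_reports: int = 100,
                 viral_views: int = 10_000,
                 high_share_rate: float = 0.05) -> bool:
    """Send a story to human fact-checkers if signals corroborate each other."""
    signals = [
        story.report_count >= min_reports,
        story.views_last_hour >= viral_views,
        story.share_rate >= high_share_rate,
    ]
    # Require at least two of the three signals, so one noisy metric
    # (e.g. brigaded reports) doesn't flood the small fact-checking teams.
    return sum(signals) >= 2


hoax = Story("example.com/a", report_count=250,
             views_last_hour=50_000, share_rate=0.08)
print(needs_review(hoax))  # True
```

The two-of-three rule is the point Snopes was making above: with only 16 people, a manual queue only works if the platform does aggressive triage first.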

Last year, Facebook explained how the process works:

    • We use technology to identify potentially false stories. For example, when people on Facebook submit feedback about a story being false or comment on an article expressing disbelief, these are signals that a story should be reviewed. In the US, we can also use machine learning based on past articles that fact-checkers have reviewed. And recently we gave fact-checkers the option to proactively identify stories to rate.
    • Fact-checkers provide a rating and reference article. Independent third-party fact-checkers review the stories, rate their accuracy, and write an article explaining the facts behind their rating.
    • We demote links rated false and provide more context on Facebook. If a story is rated false, we reduce its distribution in News Feed. (See more on how News Feed ranking works.) We let people who try to share the story know there’s more reporting on the subject, and we notify people who shared it earlier. We also show the fact-checker’s reference article in Related Articles immediately below the story in News Feed.
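The three steps Facebook lists — identify, rate, demote-and-contextualize — amount to a small pipeline. A loose sketch, with made-up rating labels and a placeholder demotion factor (Facebook doesn’t publish the real numbers):

```python
# Hypothetical sketch of Facebook's described flow: take a fact-checker's
# rating, demote the link's News Feed score if rated false, and attach the
# fact-checker's reference article as context. Ratings, the demotion
# factor, and all field names are assumptions for illustration only.

RATINGS = {"true", "false", "mixture", "satire"}


def apply_rating(feed_rank: float, rating: str) -> float:
    """Reduce a story's ranking score when fact-checkers rate it false."""
    if rating not in RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    # 0.2 is a placeholder; Facebook only says distribution is "reduced."
    return feed_rank * 0.2 if rating == "false" else feed_rank


def attach_context(story: dict, rating: str, reference_url: str) -> dict:
    """Attach the fact-checker's debunk as a Related Articles link."""
    story = dict(story, rating=rating)
    if rating == "false":
        story["related_articles"] = [reference_url]
    return story


story = {"url": "example.com/hoax", "feed_rank": 1.0}
story["feed_rank"] = apply_rating(story["feed_rank"], "false")
story = attach_context(story, "false", "factchecker.example/debunk")
print(story["feed_rank"])  # 0.2
```

Note that nothing is deleted in this flow: a false rating only cuts distribution and adds context, which is why the debate below centers on whether per-post demotion is enough.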

The dream team in 2016 began with Snopes, PolitiFact, Factcheck.org, ABC, and the AP, and now counts 34 members in countries around the world. But a recent skirmish between the partisan outlets ThinkProgress and the (now-shuttered) Weekly Standard highlighted some issues with the platform’s approach. An AP spokesperson told TechCrunch that they “fully expect to be doing fact check work for Facebook in 2019” but that the company is still in talks with Facebook about what that looks like. PolitiFact and AFP confirmed they are staying on, and a Facebook spokesperson said they’re confident in their approach and plan to expand the fact-checking partnership with more members and languages this year.

Some of the fact-checking partners are satisfied with the way the program has turned out, though it hit bumps rolling out just after the 2016 election. (The fact-checkers are paid: Snopes received $100,000 from Facebook in 2017 for the work, and France’s Libération also got $100,000 in 2017.) But you’d think if Facebook really wanted to make a dent, it would attack misinformation at the publisher level, not post by post. Laura brought that up in 2016:

    The key issue and possible pain point, which isn’t addressed in the changes Facebook outlined Thursday, is that reporting happens on a per-post level, rather than on the publisher level. Since Facebook is focusing specifically on “clear hoaxes spread by spammers” here, it seems as if it would be more efficient to simply block those spammers. But that seems to be more of a blanket approach than Facebook is willing to take at this point, and it would likely open the company up to a great deal of backlash.

    At any rate, opinions are divided about how effective the program has actually been.
