    Oct. 31, 2018, 12:08 p.m.

    This new project wants to do for news trust what FiveThirtyEight does for polls: Aggregate a bunch of signals into something meaningful

    “The goal is to make those signals more useful and to help platforms…make better, more informed decisions about ranking and ad purchases — which we hope will help drive both promotion and financial support to quality news and away from disinformation, misinformation, and junk.”

    Any attempt to declare which news sources are good and which are bad is a task freighted with danger.

    Unless you’ve been fooled by “fake news” rhetoric, you know that The New York Times, The Times of London, The Washington Post, the BBC, The Wall Street Journal, The Guardian, and other quality outlets are — despite the mistakes they each make daily and the distinct worldviews they each represent — organizations with skilled journalists, strong news values, and an honest desire to get the facts right and present their readers with something that approximates the real world.

    If you’re a media-literate consumer of news, you also know that there are partisan outlets on both left and right that view advancing a political position as a goal that stands above perfect accuracy. And you know that there are outlets that, even aside from any ideological motive, just aren’t that good at getting facts right.

    But creating a systematic way to make those distinctions — not to mention the many distinctions that lie within each phylum and genus — is a complicated task that leaves you open to critiques both honest and otherwise. How, exactly, is Fox News different from MSNBC? What’s the difference between The Telegraph and the Daily Mail, The New Republic and Occupy Democrats? Where do you draw the line between trustworthy and worth avoiding?

    There are any number of efforts to make these sorts of distinctions, many of which we’ve written about before. Some create a rules-based protocol publishers can choose to accept in order to receive a stamp of approval; others ask a team of journalists to systematically separate the wheat from the fake-news chaff. But their results can vary, and no matter what they say, it’ll take a technology giant — Facebook, Google, Apple — adopting one of them for these ratings to have any sort of mass impact.

    Into that problem walks CUNY’s Tow-Knight Center, which wants to figure out a way to combine all those efforts’ signals to create the one true trust indicator. Or at least something like it:

    With so much attention being paid to trying to limit the spread of disinformation, not enough attention and resource is being devoted to supporting quality in news. Platforms and advertisers can bring attention and revenue to quality news but they need help deciding what quality news is. There are many good and independent efforts to create signals of quality, but we have heard from technology and ad companies that it is difficult for them to make use of data from so many sources.

    We saw a need to map the work of these independent providers and aggregate the signals they generate so that technology and ad companies can make better use of them. That is what we are doing. We are not creating a whitelist or a one-size-fits-all quality score but instead are trying to help companies make better use of the signals that exist to make better judgments themselves…

    We are also not starting yet another trust/quality/credibility project to compete with the many good efforts that already exist. To the contrary, we saw the need to aggregate the signals from all those efforts, making them more impactful by putting all this data in a form that will make it easier for technology and ad companies to ingest and use it. The more these signals are used by platforms and advertisers, the more benefit that can come to news organizations (through audience and revenue), and the more news organizations are motivated to ascribe to standards of quality being formulated by such efforts as the Trust Project and another from Reporters Without Borders. That is the virtuous circle we hope to help enable.

    Think of it as a Nate Silver-esque aggregation of many signals rather than a single study.

    Most of the actual signal-aggregating work will be done by a company called Trust Metrics (slogan: “We watch the web so you don’t have to”). (It’s also worth noting that part of the funding for this as-yet-unnamed effort comes from Facebook’s Journalism Project, though it is “assured independence and is not in the service of Facebook or any single company. The fruits of our efforts will be provided to platforms, ad companies, and others that can make use of it.”)

    Today’s announcement, written by CUNY’s Jeff Jarvis and Trust Metrics’ Jesse Kranzler, argues that the Facebooks and the Googles of the world need to stop hiding behind the banner of platform neutrality and start implementing more significant controls over the spread of misinformation:

    When it comes to such content, in our opinion, neutrality is no longer an option and every major player in these ecosystems is forced to make judgments about sources, because some small but impactful number of those sources is attempting to manipulate technology and ad companies and ultimately the public conversation.

    But this new effort also aims at supporting the high end, not just quarantining the lowest of the low:

    Just as thousands are being hired to grapple with the low end of the quality spectrum, the population of journalists continues to shrink. More resources and more support must be given to quality news. That is why, in our first phase, we are directing more of our attention to the higher end of the spectrum.

    Among the sorts of signals they aim to aggregate: outside evaluations by experts; publisher-generated statements of principles; endorsements by respected professional organizations; some measure of public trust in individual outlets; the corpus of fact-checks performed on the work of different publishers; and more.

    This is of course a super-complicated question and any number of devils will be in any number of details. For instance, one of the things that Trust Metrics says it will be looking for when rating the quality of a news site is…profanity. Which seems pretty schoolmarmish? It notes that the effort will try to contextualize those stray swear words:

    A site geared towards an audience of younger adults might have a higher tolerance for what others might regard as profanity given its target demographic (e.g. Vice), but the same density of certain language might not be acceptable on a site geared towards a different audience…Profanity occurring in a quote or in the context of objective news is not an issue.

    Which, okay, but whether a news organization is willing or unwilling to print “shit” seems quite distinct from its trustworthiness to me. (Then again, maybe I’m just in a “high-tolerance” demographic.)

    In any event, it strikes me as a good thing that someone is trying to distill all these signals into something coherent; if tech platforms are ever going to adopt any sort of systematic trust metric, it’ll likely take something that comes out of a broader industry consensus, not a single rating. But it still very much strikes me as a heavy, heavy lift.

    Noise-to-signal illustration based on work by used under a Creative Commons license.
