    Nov. 5, 2018, 9:43 a.m.

    Unlike in 2016, there was no spike in misinformation this election cycle

    The “Iffy Quotient” has been downright steady leading up to tomorrow’s midterm elections, and Facebook deserves some credit for it.

    A newsy photo of a public figure shows up on your social media feed, with a clickbait-y headline and a provocative comment, all linking to a site with juicy political content. Did you share it?

    Somebody did.

    It wasn’t a paid ad, or even recommended-for-you content — it was shared by someone you know. The link didn’t take you to InfoWars or Occupy Democrats — you would’ve noticed that. Maybe it went to Western Journal or another unfamiliar domain whose name sounds legit. Did you comment on it or retweet it?

A lot of somebodies did.

Some people and deep-pocketed influence campaigns spread plausible misinformation — what I like to call “iffy” content — as a cost-effective way to advance their social or political cause. Others spread misinformation just to turn a profit.

    Meanwhile, the big social media platforms struggle to implement fair editorial practices — disclosures and demotions, blocks and bans — to attenuate the spread of misinformation rather than amplify it.

How well have Facebook and Twitter done? Are they helping iffy content reach large audiences? At the University of Michigan, we have started keeping score, going back to early 2016.

We compute a daily “Iffy Quotient” — the fraction of the 5,000 most popular URLs on each platform that came from a large, externally maintained list of sites that frequently originate misinformation and hoaxes. The Iffy Quotient is a way for the public to track the platforms’ progress — or lack thereof.
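The metric described above is simple to state: count how many of the day’s most popular URLs resolve to domains on a flagged list, and divide by the total. Here is a minimal sketch of that calculation; the domain normalization and the example URLs and flagged-site list are illustrative assumptions, not the actual methodology or data used by the Michigan team.

```python
# Illustrative sketch of a daily "Iffy Quotient"-style calculation:
# the fraction of popular URLs whose domain appears on a list of
# sites that frequently originate misinformation.
from urllib.parse import urlparse

def iffy_quotient(popular_urls, iffy_domains):
    """Return the fraction of URLs whose domain is on the iffy-site list."""
    if not popular_urls:
        return 0.0
    iffy_hits = sum(
        1 for url in popular_urls
        # Normalize: extract the host, lowercase it, drop a leading "www."
        if urlparse(url).netloc.lower().removeprefix("www.") in iffy_domains
    )
    return iffy_hits / len(popular_urls)

# Toy example (hypothetical domains): 2 of 4 top URLs are flagged.
urls = [
    "https://example-news.com/story1",
    "https://www.iffy-site.example/clickbait",
    "https://another-iffy.example/hoax",
    "https://example-news.com/story2",
]
flagged = {"iffy-site.example", "another-iffy.example"}
print(iffy_quotient(urls, flagged))  # 0.5
```

In practice the hard parts are upstream of this arithmetic: identifying the day’s 5,000 most-engaged URLs on each platform and maintaining a defensible list of iffy domains.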

    Measuring iffy content

    We saw a major uptick in the run-up to the 2016 U.S. presidential election. Iffy content approximately doubled from January to November.

    Engagement with iffy content fell off precipitously after the election. Questionable content peaked again in February 2017, tracking public dialogue over the presidential transition and early executive orders.

    Twitter did a better job than Facebook of not amplifying iffy content going into 2017, then Facebook started to improve. By the middle of 2018, Facebook’s Iffy Quotient was lower than it had been in mid-2016, and most days it was lower than Twitter’s.

    Why did things get bad back in 2016? One reason for the uptick is that users are more politically activated during an election cycle. That boosts interest in political news — especially in sensational political news. Supply rises to meet that demand — from legitimate sources but also from both propagandists and opportunists seeking ad revenue.

    Assuming that the publishers and disseminators of misinformation are as competent and motivated in 2018 as they were in 2016, we expected the Iffy Quotient to spike in September and October. But it didn’t.

    What’s different? We can’t tell for sure. Perhaps the suppliers of such content lost interest, though that seems unlikely. Perhaps the American public got more sophisticated and is less prone to click on or share links to iffy sites. Sadly, that also seems unlikely, though it is a nice long-term aspiration.

The most important difference is probably countermeasures taken by the platforms. Twitter executive Colin Crowell wrote on the company’s blog in 2017, “We’re working hard to detect spammy behaviors at source, such as the mass distribution of Tweets or attempts to manipulate trending topics.” Fake accounts can be used to make content look more popular than it really is, leading the platforms to show the content to more people. Weeding out accounts that engage in such behavior reduces the opportunities for such manipulation.

Facebook has also actively tried to reduce manipulation opportunities by removing fake accounts — 583 million of them in the first quarter of 2018. In addition, in December 2016, Facebook announced a partnership with independent fact-checkers, sending them questionable stories and demoting in the feed those that the fact-checkers labeled as false.

On Jan. 11, 2018, Facebook announced that it would show less public content, in favor of native posts from friends and family. On its own, that wouldn’t affect the Iffy Quotient, which is based on whatever public content is most popular. However, the announcement also implied other changes that might have affected the Iffy Quotient. One was prioritizing content around which people interacted with friends; it could be that people interact less around content from iffy sites. Another was prioritizing news that the community rates as trustworthy, that people find informative, and that is local.

    Holding platforms accountable

    Media companies already maintain internal suites of metrics, such as monthly pageviews, clickthrough rates, dwell times, and ad revenue. These metrics strongly influence decisions about changes to products and policies. Typically, product managers are rewarded for improving some primary metric, subject to the constraint that there is at most a modest decline in other metrics.

    Externally maintained metrics, like our Iffy Quotient, offer two advantages over internal metrics maintained by the platforms. First, they can draw attention to issues that platforms may either not be tracking themselves or not prioritizing as much as the public would like. This form of public accountability focuses attention on the overall performance of platforms rather than on bad outcomes in individual cases; some bad outcomes may be inevitable given the scale on which the platforms operate.

    Second, external metrics can create public legitimacy for claims that platforms make about how well they are meeting public responsibilities. Even if Facebook actually reduces the audience share for iffy content, the public may be skeptical if Facebook defines the metric, conducts the measurement without audit and chooses whether to report it.

In the 2016 election season, Twitter and especially Facebook performed poorly, amplifying a lot of misinformation. In the 2018 cycle, Facebook has performed somewhat better, but Twitter needs to up its game. Facebook, I salute you. For now. But we’ll keep watching.

The author is a professor of information at the University of Michigan. This article is republished from The Conversation under a Creative Commons license.

