    Nov. 7, 2018, 10 a.m.
    Audience & Social

    YouTube helps a majority of American users understand current events — but 64 percent say they see untrue info

    When a one-hour outage on the platform can result in a 20 percent net hike in traffic to publishers’ websites, YouTube’s got a special share of the attention economy.

    As much as we rag (mmm, rightfully!) on the major tech platforms for their algorithms getting “don’t amplify disinformation” wrong, YouTube as a platform occupies a very peculiar spot. Unlike its more social peers, YouTube isn’t primarily about making meaningful connections, snippets of snark, or perfected selfies. It’s closer to a pure consumption platform, at least the way most people use it, and it’s unusually directed toward usefulness.

    Are you actually wasting time on YouTube when you’re watching a cooking video instead of scrolling/tapping mindlessly through one of your various News Feeds elsewhere? Is it pacifying your grabby infant so you can be an adult and clean the bathroom? Are you going to learn how to knit or repair something in your home any other way? See, useful.

    YouTube also has its downsides: drawing some users toward radicalizing content, surfacing false and troubling videos, and amplifying creators like PewDiePie, who stupidly published anti-Semitic material. Not so useful.

    Still, when a one-hour outage on the platform can result in a 20 percent jump in traffic to publishers’ websites (compared to a 2.3 percent increase when Facebook was down), YouTube’s got a special share of the attention economy.

    The Pew report has new data on just how useful YouTube is, including its recommendations algorithm, which apparently drives much of what users end up watching. 35 percent of all U.S. adults use YouTube, and 51 percent of those say YouTube has helped them learn how to do something for the first time, according to a survey drawing on 4,500 Americans. The percentage of YouTube users who say they get news or headlines there has doubled since 2013.

    YouTube also plays a big role in occupying those who aren’t yet of reading age. 81 percent of all parents with kids age 11 and under have used YouTube to placate their spawn at least once; more than a third allow their kid to watch videos on the platform regularly. The Pew report points out that YouTube, by its own terms of service, is intended for those age 13 and older, though YouTube Kids is supposed to be a safer version of the platform.

    There’s still plenty of questionable content on YouTube, and a majority of respondents noted that they often encounter “troubling or problematic” videos. 60 percent told Pew that they end up watching videos of “dangerous or troubling behavior,” and 64 percent see videos that “seem obviously false or untrue.” This persists in the kids’ content as well: One example was a three-year-old boy coming across “PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized.” This is pretty much the opposite of useful.

    Crises like the PAW Patrol incident uncovered by the Times, not to mention a whipsawing 2017 for the platform — the downfall of its biggest star, the apparently anti-Semitic gamer PewDiePie, and a near-boycott from big brands whose advertising was running alongside racist videos — spurred YouTube to release a transparency report in May. Users have always had the opportunity to flag inappropriate content, as we wrote at the time, but it turns out YouTube didn’t rely too heavily on those signals:

    YouTube’s latest transparency report tells us a great deal about how user flags now matter to its content moderation process — and it’s not much. Clearly, automated software designed to detect possible violations and “flag” them for review does the majority of the work. In the three-month period between October and December 2017, 8.2 million videos were removed; 80 percent of those removed were flagged by software, 13 percent by trusted flaggers, and only 4 percent by regular users. Strikingly, 75 percent of the videos removed were gone before they’d been viewed even once, which means they simply could not have been flagged by a user.

    On the other hand, according to this data, YouTube received 9.3 million flags in the same three months, 94 percent from regular users. But those flags led to very few removals. In the report, YouTube is diplomatic about the value of these flags: “user flags are critical to identifying some violative content that needs to be removed, but users also flag lots of benign content, which is why trained reviewers and systems are critical to ensure we only act on videos that violate our policies.”

    Pew researchers also explored the recommendation algorithm, which 81 percent of those polled say at least “occasionally” drives their video consumption choices. Here’s what they found:

    • 28 percent of the videos they encountered were recommended multiple times, “suggesting that the recommendation algorithm points viewers to a consistent set of videos with some regularity.”
    • YouTube recommends longer and longer content over time. The researchers started with videos that were 9:31 long, on average, and by the fourth recommendation were directed to a nearly 15-minute-long video.
    • The algorithm also pointed users toward more and more popular videos: More than two-thirds of the recommended videos had more than 1 million views. The average view count rose from 8 million for the starting videos to 30 million for the first recommended video and more than 40 million by the fourth.

    Video has not proven effective as the next! hot! thing! for publishers to pivot to, as demonstrated by Facebook’s video hype-and-fail. But the YouTube niche is there, and it’s definitely not cold. Nearly one in five respondents told Pew YouTube helps them understand things happening in the world — you know, current events and news.

    Earlier this year, YouTube announced its plan for improving the platform’s news discovery experience. It includes $25 million in grants for news organizations to build out their video operations and experiments with boosting local news in YouTube’s connected TV app — not to mention adding text-based news article snippets from “authoritative sources” alongside search results in breaking situations — but TBD on that initiative’s success. If YouTube really wants to be the most useful platform, it might want to make sure it’s not scarring children for the rest of their lives or radicalizing someone who just wants to learn how to clean a gun.

    Image from used under a Creative Commons license.
