    Nov. 9, 2018, 9:46 a.m.
    Audience & Social

    Facebook Groups are “the greatest short-term threat to election news and information integrity”

    Plus: How “junk news” differs from “fake news,” and LinkedIn gets less boring (but not in a good way).

    The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers highlights of what you might have missed.

    Facebook has not “fixed it.” While nothing catastrophic, fake news-wise, appears to have happened on the United States’ Election Day this week, extensive research published this week by Jonathan Albright (whom I interviewed last year) shows what was still happening on Facebook three days before the midterms.

    “It’s the scale of the problems, not the sum of the problems, that represents the greatest threat,” Albright writes. “The issues I’ve found on Facebook the past few months — through large-scale analytics, content analysis, extensive political ad archive querying, and upon the close inspection of thousands of posts and information-sharing activities — involve patterns that have been on the radar of the company’s leadership and American politicians since the last election. They’ve been revisited in scores of hearings, broadcast on television, and recited around the country in ‘how we’re fixing it’ slide decks.”

    Here are some of the things that Albright turned up:

    — “I found influential Facebook Pages, including the verified Pages of publishers and political funding groups being managed by accounts based outside the United States.” These are “influential Pages with foreign ‘manager’ accounts that have been running extensive political ad campaigns on Facebook, targeting users in the United States over the past six months.”

    — “Facebook’s political ad transparency tools — and I mean all of them — offer no real basis for evaluation.”

    — Bad actors have moved from visible Pages to private Groups. “Seeding political ideas and conspiracies on Facebook has never been that difficult. But they used to be easier to find. Groups change this dynamic, leveling the playing field for those who seek to peddle unreliable information, hyper-partisan news, rumors, and conspiracy theories. As of 2018, Groups now play a major role in manipulation, helping to push ideas at the right place and at the right time across the Facebook platform-at-large.” Essentially, “Facebook’s Groups offer all of the benefits with none of the downsides.”

    Albright writes:

    We can talk about how scary WhatsApp is in other countries, and how Twitter might play a leading role in the United States elections, but it is Facebook’s Groups — right here, right now  —  that I feel represents the greatest short-term threat to election news and information integrity.

    It’s still Fox News that’s the problem. Harvard’s Yochai Benkler, Robert Faris, and Hal Roberts have a new book out, Network Propaganda. Benkler talked to George Washington political scientist Henry Farrell for The Washington Post:

    Farrell: You argue that the most serious problem is Fox News, not Facebook. Why so?

    Benkler: Because that’s where the eyeballs are.

    The highly asymmetric pattern of media ecosystems we observe on the right and the left, despite the fact that Facebook and Twitter usage is roughly similar on both sides, requires that we look elsewhere for what is causing the difference.

    Surveys make it clear that Fox News is by far the most influential outlet on the American right — more than five times as many Trump supporters reported using Fox News as their primary news outlet than those who named Facebook. And Trump support was highest among demographics whose social media use was lowest.

    Our data repeatedly show Fox as the transmission vector of widespread conspiracy theories. The original Seth Rich conspiracy did not take off when initially propagated in July 2016 by fringe and pro-Russia sites, but only a year later, as Fox News revived it when James Comey was fired. The Clinton pedophilia libel that resulted in Pizzagate was started by a Fox online report, repeated across the Fox TV schedule, and provided the prime source of validation across the right-wing media ecosystem.

    In 2017 Fox repeatedly attacked the national security establishment and law enforcement whenever the Trump-Russia investigation heated up. Each attack involved significant online activity, but the spikes in attention and transition moments are associated with Hannity, “Fox & Friends” and others like Tucker Carlson or Lou Dobbs.

    LinkedIn: No longer the boring one? “Facebook and Twitter’s crackdown on hate speech, false news, and manipulation has caused some people to move their political content sharing to LinkedIn. The result is an increase in MAGA and #Resistance memes and intense, sometimes vitriolic, political discussions,” BuzzFeed News reports. The reporter found tons of examples, some of which LinkedIn did not remove after he flagged them: “The company did not remove the Obama birther post. LinkedIn also left up a post with a photoshopped image of Georgia gubernatorial candidate Stacey Abrams and Muslim activist Linda Sarsour that added the word ‘communist’ to an Abrams campaign sign.”

    LinkedIn CEO Jeff Weiner told Axios earlier this year that “arguably the most important” way LinkedIn avoids fake news and platform abuse is “manual curation and the role of editors.” LinkedIn executive editor Daniel Roth said, “When you write or share or comment on LinkedIn, your boss sees it, your employees see it, your future business partners see it. So people tend to be much more careful about what they say.” (I guess Facebook users must not care much what all their friends and family and coworkers and high school classmates think, then.)

    It seems more likely, though, that since Facebook has cracked down on fake news somewhat, people are just moving elsewhere, and now it’s LinkedIn reps who are making the statements about quality discourse and differences of opinion. “While most of our members do not share political content, we do believe that high-quality discourse that is relevant to our purpose, to create economic opportunity for every member of the global workforce, has a place on our platform,” a LinkedIn spokesperson told BuzzFeed News.

    What’s junk, anyway? The Oxford Internet Institute released its Junk News Aggregator, which aggregates all of the midterm-related news shared on Facebook by partisan outlets like Breitbart, The Daily Caller, Shareblue, RawStory, and so on. Under its methodology, any post that one of those outlets posts to Facebook is junk. That includes, for instance, The Daily Caller’s “WATCH LIVE: President Trump holds a press conference after the 2018 midterm elections,” or Breitbart’s “Just In: ‘We are proceeding to a recount,’ Bill Nelson said,” or The Blaze’s “Beto falls short. Cruz gets it done.” But those, like many of the stories these sites share, are actual news stories — from partisan outlets, sure, but factual themselves.

    What is the point of this exercise, then? The Oxford Internet Institute says that “approximately 25 percent of shared content related to the midterm elections can be classified as junk news, compared to the 19 percent of shared content created by professional news outlets. Less than 5 percent of shared content came from government agencies, experts, or the candidates themselves.” But swap in “partisan” (“approximately 25 percent of shared content related to the midterm elections can be classified as PARTISAN news”), and this feels…unsurprising and obvious. It’s also not surprising that social content created by “government agencies” and “experts” performs worse on social media than content that’s explicitly designed to drive clicks.

    (We had a similar issue with a different Oxford study back in February. Back then, researchers classified sites like the National Review, the New York Daily News, Mediaite, the polling site Rasmussen Reports, and the conservative nonprofit Judicial Watch as “junk news.” If you look through the aggregator now, it appears that those sites don’t count anymore. But it also appears that Breitbart now makes up a literal majority of all “junk news” classified, which seems like a stretch. Of the 200 stories the aggregator returns in a search over the past week, 137 are from Breitbart and 25 are from The Daily Caller; no other site has more than 10.)

    “The term ‘junk news’ is not only about hyperpartisanship, but also about other problematic practices such as writing falsehoods, peddling conspiracy theories, inadequate fact checking, lack of transparency in terms of displaying article author names and editors names, etc.,” said one of the Oxford researchers, pointing me to the project’s methodology. “The goal is to make the problem of junk news on social more transparent to the public, and to aid media literacy.”

    Left-leaning and right-leaning news sites cover fake news differently. Researchers examined how five “left-leaning” sites — HuffPost, PoliticusUSA, Daily Kos, The Guardian, and RawStory (yes, I agree it’s a little weird that The Guardian is included) — and five “right-leaning” sites — Breitbart, Fox News, The Washington Examiner, The Daily Caller, and The Right Scoop — cover the topic of fake news. They found that the left-leaning sites “focus on specific fake news stories and individuals involved,” while right-leaning sites “shift the focus to a narrative of mainstream media dishonesty more broadly.” The left-leaning sites also tended to use “more moderately positive emotion words and slightly fewer negative-emotion words than right-leaning sites.”

    Illustration from L.M. Glackens’ The Yellow Press (1910).

    PART OF A SERIES     Real News About Fake News