
Decoder Newsletter: Awaiting the Oversight Board’s decision on Trump

Margaret Sessa-Hawkins and Viviana Padelli | February 17, 2021

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Oversight?: The Facebook Oversight Board is expected to announce imminently its decision on whether Donald Trump should be permanently suspended from the platform. Ann Ravel, MapLight’s Digital Deception project director, submitted comments to the board arguing that Trump must be permanently banned. High-profile individuals, including Alex Stamos, Facebook’s former chief security officer, and Janai Nelson of the NAACP-LDF, also submitted a letter to the board urging it to uphold the platform’s decision. Members of the Real Facebook Oversight Board submitted their own letter arguing that the ban be made permanent.

  • How did we get here?: In light of the pending Trump decision, The New Yorker published a deep-dive piece by law professor Kate Klonick on the formation and function of the Oversight Board. Klonick was granted access to the board by Facebook. Reacting to the piece, Zeynep Tufekci of UNC wrote a Twitter thread on the importance of continually questioning the idea of the board’s legitimacy. Klonick responded with a thread on why she felt the project was important, even though it involved ‘access journalism’. Elettra Bietti of Harvard Law then added perspective on the challenges faced by those questioning the status quo.

  • Less political: Facebook has started rolling out efforts to make News Feeds less political -- even though, as The New York Times’ Kevin Roose pointed out, it hasn’t defined what is and isn’t ‘political’ (although Covid-19 information from governments and health sources will be exempt). Enforcement could also be problematic: the algorithms the company pledges to use are error-prone, and as a New York Times article examining bias against adaptive clothing ads demonstrates, machine learning always encodes a worldview. Meanwhile, Twitter is looking to build an app store where you can choose your recommendation algorithm.

  • Transparency report: Facebook released its fourth-quarter content moderation report Thursday. Protocol took a look at one key aspect left out of the report: the way the company’s own algorithms consistently promote policy-violating content. Also missing from the report? Any mention of coordinated inauthentic behavior related to the Capitol riots. As Casey Newton pointed out, though, that’s probably because the behavior, while coordinated, was still ‘authentic’. As Evelyn Douek stressed back in July, the fact that there’s still no clear definition of coordinated inauthentic behavior is also problematic.

  • Return of Parler: Social network Parler is back online, with SkySilk hosting. In a statement on its policies, SkySilk said that it “advocates the right to private judgment and rejects the role of being the judge, jury, and executioner.” Parler’s interim CEO is Mark Meckler, founder of Tea Party Patriots, a group that helped to organize the march that eventually led to the Capitol riots. Alex Kotch has more on the new head of the app.

  • Bad enforcement: A new investigation by The Markup, using its Citizen Browser project, found that Facebook rarely marks disinformation on its platform as false -- especially when it comes from high-profile accounts. Instead, the platform usually added flags directing users to election information or noting that posts were ‘missing context’. The company also didn’t seem to be reducing the spread of flagged posts, as it had said it would.

  • Birdwatch: In an NPR report on Birdwatch, Twitter’s crowdsourcing tool for fighting disinformation, Madelyn Webb, a researcher at First Draft, notes that so far, ‘It's sort of just replicating what we see on Twitter’. Not exactly a ringing endorsement.

  • Covid Clubhouse: In last week’s Decoder we took a brief look at the invite-only nature of Clubhouse. Now, Vice reports that the app is becoming a hotspot for Covid-19 misinformation. Bloomberg Business profiled some of the Black doctors trying to combat health disinformation on the app — a side gig that is leading to bullying, harassment, and emotional and physical strain for already beleaguered medical personnel. In China, however, the app has proved a haven for discussing contentious topics.

  • Banned?: Instagram permanently removed the account of Robert F. Kennedy Jr. after he shared multiple false claims about the coronavirus and vaccines. As with many social media policies, however, enforcement of the ban has been inconsistent. The Center for Countering Digital Hate notes that his organization, Children’s Health Defense, also spreads misinformation and has not been banned. The Real Facebook Oversight Board further found that his account on Facebook (which owns Instagram) is also still up.

  • Twitter & India: The government’s reaction to protests in India has sparked a high-profile debate about free speech. The incident began when Twitter suspended hundreds of accounts associated with the protests at the government’s demand, which cited misinformation and incitement to violence. The company then unsuspended the accounts following a backlash. The fight escalated to the point that India threatened the company with fines and jail time, and Twitter responded with a forceful blog post. In The New York Times’ On Tech newsletter, Shira Ovide speaks to former U.N. special rapporteur on free expression David Kaye about the debate and its implications.

  • DSA update: The European Union’s Data Protection Supervisor has been pushing for the Digital Services Act to include a ban on ads targeted based on tracking of users’ digital activity.

  • Research: Facebook has announced that it will share two new datasets with researchers: an ad targeting dataset and an ad library dataset. The Atlantic Council’s Digital Forensic Research Lab has put together a timeline of the Stop the Steal movement, looking specifically at the events that led to the Capitol riots and the groups involved. A new report from the Anti-Defamation League has found that YouTube is still recommending extremist videos to users susceptible to racial hatred. First Draft has an article looking at the role broadcast television played in amplifying Trump’s digital disinformation. BBC Monitoring research has found that anti-vaxx accounts saw a surge in followers during the pandemic.