NEWS

Decoder Newsletter: Partisanship & Section 230 Reform

Margaret Sessa-Hawkins | September 14, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • A newly introduced Senate bill aims to reform Section 230 of the Communications Decency Act. The Online Freedom and Viewpoint Diversity Act -- which, unlike previous bipartisan efforts to modify the law, is sponsored only by Republicans -- would strip platforms of their Section 230 liability protection if they restricted access to content that did not violate a narrow set of policies, such as prohibitions on promoting terrorism. Not only could the bill end up empowering online trolls, but it’s also potentially unconstitutional. Paul Barrett, the deputy director of the New York University Stern Center for Business and Human Rights, has released an in-depth report on Section 230 that proposes less harmful ideas for the act’s reform.

  • Accountable Tech has released a new Election Integrity Roadmap, supported by MapLight, to help social media platforms combat disinformation as the 2020 presidential election approaches. The roadmap lays out concrete steps platforms can take to stem the rising tide of disinformation, including implementing tiered penalties to target repeat offenders, empowering state election directors and NGOs with expertise in voting rights or disinformation to promote credible information and counter misinformation, and automatically flagging election-related posts from the highest-reach accounts for rapid human review.

  • Facebook’s recent political ad ban completely misses the mark, Viviana Padelli writes for MapLight. She points out that the company’s business model promotes misinformation, that limiting new ads in the week before the election could actually increase disinformation, and that microtargeting continues unchecked. The ban also does nothing to address the platform’s continued failure to enforce its own policies, as demonstrated when it once again failed to remove a post by President Trump spreading electoral disinformation.

  • A network of Facebook pages is spreading voting disinformation despite the platform’s policies against the practice, Judd Legum and Tesnim Zekeria reported Thursday. Simultaneously, the Knight Foundation has found that people who get their news from social media are less likely to vote and more likely to be skeptical of the electoral process. It’s not just voting misinformation, either: in interviews with dozens of individuals in Kenosha, Wisconsin, Charlotte Alter writes in Time, about one in five people espoused conspiracy theories common on social media.

  • It’s timely, then, that the Brennan Center has released a report on combating disinformation. The report examines how election officials, internet companies, and the federal government can each play a part in lessening the effects of misinformation. Other recent suggestions for addressing disinformation include a profile of MediaWise for Seniors in the New York Times on Friday, a call to make sure children are able to recognize and respond to misinformation, a platform-by-platform look at tackling disinformation, and fact-checkers proposing solutions Facebook could implement.

  • Twitter has announced that it will expand its civic integrity policies. The changes, which go into effect on September 17th, include labeling posts that contain false or misleading information about how to participate in elections or about the outcome of elections, as well as posts that dissuade people from voting. Google will also stop showing autocomplete suggestions for searches about election integrity, as well as suggestions that could be interpreted as claims for or against a particular candidate. For those interested in comparing different social media platforms’ civic integrity policies, the Election Integrity Partnership has an overview and evaluation that’s worth looking at.

  • Top officials at the Department of Homeland Security directed analysts to downplay the threats of Russian election interference and white supremacy, a newly released whistle-blower complaint alleges. In the complaint, which was released Wednesday, Brian Murphy, the former head of the department’s intelligence branch, says he was ordered to stop producing reports on Russia’s efforts to interfere in the election. He was also ordered to modify assessments to minimize the threat posed by white supremacists while turning attention toward “left-wing” groups.

  • We shouldn’t forget how much damage domestic disinformation is doing, though. In The New Yorker, Joshua Yaffa points out that while there is rightly immense concern over foreign interference in our elections, there isn’t the same worry about domestic digital disinformation. In a similar vein, another New Yorker article, this one by Sue Halpern, looks at how much data the Trump campaign’s mobile app is collecting -- even from people who haven’t consented to giving away their personal information.

  • Mark Zuckerberg appeared on Axios on HBO Tuesday. He denied that Facebook has become “a right-wing echo chamber,” a reference to Kevin Roose’s New York Times story on the popularity of conservative content on the platform (Rebecca Heilweil also has an interesting analysis of Roose’s reporting over at Recode). Zuckerberg also said that he won’t take down anti-vaccination posts but will remove any posts that threaten election officials.

  • The European Commission has said that big tech must do more to tackle disinformation. An assessment of the self-regulatory code of practice that Facebook, Google, Twitter, and other large tech companies agreed to in order to reduce disinformation across their platforms noted several problems, including “inconsistent and incomplete application of the code across platforms and member states, lack of uniform definitions, existence of several gaps in the coverage of the code commitments, and limitations intrinsic to the self-regulatory nature of the code.” Vera Jourova, the commission’s vice president for values and transparency, said more must be done to address these issues and counter disinformation.

  • A Facebook software engineer resigned in protest Tuesday, saying the company was “profiting off hate in the US and globally.” In a 1,300-word, link-filled document, Ashok Chandwaney detailed complaints about the company’s refusal to provide Gambia with information that could help hold Myanmar accountable for genocide, its failure to take down a Kenosha Guard event page prior to a shooting there, and what he sees as the company’s consistent inaction on hate speech and racism except as a PR maneuver. Chandwaney concludes that regulatory action is needed to overcome the company’s inertia.