NEWS

Decoder Newsletter: Skepticism Over Facebook’s Election Policies

Alec Saslow | September 04, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • The big news this week is Facebook’s new initiatives to safeguard the election. The policy changes, unveiled in a post by CEO Mark Zuckerberg, include a ban on new political advertisements in the week before the election, limits on the forwarding of election information on Messenger, and an expansion of policies meant to combat online voter suppression. The initiatives have their shortcomings, though: the ad ban covers only new ads in the final week, it does nothing about organic disinformation on the site, and it leaves in place the systemic issues that spread that disinformation. Limiting forwarded messages, however, may be more effective. Regardless, Zeynep Tufekci tweets that the announcement should be a reminder of how much power one man, Mark Zuckerberg, wields over the way political campaigns and communications now operate.

  • MapLight’s Take: It’s helpful that Facebook has at least acknowledged it must change its policies to slow the spread of false and deceptive information that harms our democracy, but these changes amount to little more than a drop in the ocean, especially when Facebook’s record on enforcing its existing policies is lackluster at best. Facebook is designed to prioritize engagement and outrage to boost profits, which puts it at odds with containing harmful and manipulative content.

    Millions of people will have already voted before the seven-day window in which Facebook is prohibiting new political ads, and false and deceptive information will continue to circulate so long as it comes in the form of organic posts or ads that are already running. Because Facebook has failed to make meaningful changes to ad targeting, campaigns will still be able to narrowly target groups and individuals with messages that escape public scrutiny. This latest announcement proves yet again that Facebook is incapable of effective self-regulation. Congress, which answers to the people and not to shareholders, must step in with clear laws to create a healthier online ecosystem.

  • Despite its claims, Facebook did not remove an event page that called for violence in Kenosha. Internal company discussions obtained by BuzzFeed News show that the “Armed Citizens to Protect our Lives and Property” event, which Facebook claimed to have removed the day after the shooting, was actually taken down by one of the Kenosha Guard page administrators. Before it was taken down, the event prompted 455 complaints to Facebook but was still judged not to violate the company’s policies. In The Verge, Casey Newton has called for Facebook to issue a public report on the incident. A coalition of activists has also called on Facebook to implement a number of policy changes, including banning pages that encourage people to bring weapons to events.

  • The Russian organization that interfered in the 2016 election is at it again, according to Facebook and Twitter. On Tuesday, the social media platforms said the Internet Research Agency had set up a network of fake accounts and a website, called Peace Data, designed to look like a left-wing news outlet. The site then hired U.S. journalists to write articles critical of Joe Biden and Kamala Harris. The New York Times interviewed one of the journalists, who said there were warning signs that something wasn’t right. A new NBC report shows that, as in 2016, minorities in particular are being targeted by these disinformation efforts.

  • The IRA is just one source of disinformation around the election. Just this week, President Trump encouraged people to vote twice, which is illegal. Twitter hid the tweets behind a public interest notice, while Facebook put a fact-checking label under the same post. Meanwhile, a Mother Jones investigation found that Russia is backing Trump’s effort to undermine mail-in ballots with a disinformation campaign of its own. There are also worries about misinformation emerging from an election night “Red Mirage,” in which Trump appears ahead because in-person votes are counted first and claims victory before mail-in ballots have been counted.

  • Twitter has said it will display warning labels on misleading or doctored videos. The change comes after the company faced complaints for not limiting the spread of a deceptive clip of Joe Biden speaking. Separately, the platform will also add context to topics appearing in its ‘trending’ section. For The Verge, Casey Newton argues the policy doesn’t go far enough and that the ‘trending’ section should be taken down altogether.

  • Facebook has threatened to ban news content from its platform in Australia as part of an ongoing fight against a proposed regulation there. The new rule, which could take effect within the next few months, would require Facebook and Google to pay publishers for news content posted to their sites. Regional publishers have said the ban would allow misinformation to spread dangerously on the platform. Back in May, the Columbia Journalism Review held a panel discussing whether Facebook and Google should be forced to pay for news content, and what the consequences of such a move would be.

  • A new study has found that warning participants about misinformation can help preserve the integrity of their memories. While exposure to misinformation can alter people’s recollection of past events, the study, published in the Proceedings of the National Academy of Sciences (PNAS), found that warning people about the threat of misinformation helped them retain accurate memories. Another study, this one from Harvard’s Kennedy School, lends credence to the idea that a lie can travel halfway around the world while the truth is still putting on its shoes: it found that it took a week for debunking tweets to match the volume of misinformation tweets.