The Digital Deception Glossary

Margaret Sessa-Hawkins and Hamsini Sridharan | August 23, 2019

Online political manipulation has increasingly become a feature of political life in the U.S. and around the world. As information about online political manipulation becomes more common, though, an entirely new lexicon has emerged to talk about the phenomenon. Here, we’ve collected and defined some of the terms most commonly used in reference to online political manipulation. So if you find yourself having trouble telling your botnets from your troll farms, never fear: MapLight’s digital deception glossary is here.

Digital Deception

In politics, “digital deception” refers to the dissemination of deliberately misleading information online. Digital deception can be propagated in a number of ways: websites can create false stories purporting to be news; ads from undisclosed sources can be microtargeted to individuals; conspiracy theories can be created and circulated on message boards; or social media hashtags can be hijacked by false accounts. When false information is propagated deliberately, it is known as digital disinformation; when it’s unintentional, it’s called misinformation.

Microtargeting

One way digital deception spreads is through microtargeting. Microtargeting occurs when an organization or person uses online data (which can be obtained from social media platforms, third-party data brokers, or even voter files) to target messaging to very narrow segments of the public. Individuals can be targeted based on demographic characteristics such as age, race, and gender, as well as religion, income, marital status, hobbies, political views, social media habits, and propensity to vote. The ability to target messages so narrowly gives rise to social media “dark posts”: posts that are visible only to the messenger and their target audience. This allows advertisers to show manipulative, contradictory, and discriminatory messages to different groups, while making it nearly impossible to refute those messages or hold the advertisers accountable.
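To make the mechanics concrete, here is a minimal sketch in Python of how an advertiser might slice a voter file into a narrow “dark post” audience. The records, field names, and thresholds are all hypothetical; real targeting systems draw on far richer commercial data.

    # Hypothetical sketch of audience segmentation.
    # Records and fields are invented for illustration only.
    voter_file = [
        {"id": 1, "age": 67, "county": "Lake", "turnout_score": 0.9, "issue": "healthcare"},
        {"id": 2, "age": 24, "county": "Cook", "turnout_score": 0.3, "issue": "climate"},
        {"id": 3, "age": 71, "county": "Lake", "turnout_score": 0.8, "issue": "healthcare"},
    ]

    def build_segment(records, min_age, county, issue, min_turnout):
        """Return only the voters matching a very narrow profile."""
        return [
            r for r in records
            if r["age"] >= min_age
            and r["county"] == county
            and r["issue"] == issue
            and r["turnout_score"] >= min_turnout
        ]

    # A "dark post" audience: likely voters over 65 in one county
    # who care about healthcare.
    segment = build_segment(voter_file, min_age=65, county="Lake",
                            issue="healthcare", min_turnout=0.5)
    print([r["id"] for r in segment])  # -> [1, 3]

The same filtering logic, applied at scale across thousands of attributes, is what lets an advertiser show one message to this sliver of the public and a contradictory message to another.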

Dark ads are especially problematic in their targeting of minorities. The Trump campaign, for example, admitted to targeting black voters with dark ads in 2016 in an attempt to dampen turnout for Hillary Clinton. While Facebook, Twitter, and Google have all created archives of paid political ads, these “transparency” centers are incomplete and buggy, and they do not give users a clear picture of a particular advertisement’s targeting; as a result, microtargeting is still nearly impossible to track in any systematic way.

Computational Propaganda

“Computational propaganda,” a term coined by researchers Samuel Woolley and Philip N. Howard, refers to the use of algorithms, automation, and human curation to disseminate and amplify misleading political agendas. It includes political disinformation spread by bots, botnets, troll farms, and sockpuppets (see below).

Bot

One of the ways disinformation can be amplified is through bots. The term “bot” refers to various types of automated social media accounts. While some such accounts are fully automated, others are only partly automated and partly operated by real people; these hybrid accounts are generally referred to as sockpuppets. Political bots are automated specifically to interact with other user accounts focused mostly on politics.

Worth noting: While bots are often associated with amplifying false, misleading, or divisive messages on social media, they can also be used for social good. Examples include @UnitedStatesV, which tweets about lawsuits filed by the government, and @Probabot_, which tweets about whether specific accounts are likely to be bots.
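To give a sense of how simple full automation can be, here is a minimal sketch of a benign posting bot in the spirit of @UnitedStatesV. It assumes the tweepy library (version 4.x) and valid developer credentials; the fetch_new_lawsuits helper is hypothetical and stands in for a real court-records feed.

    import time

    import tweepy  # assumes tweepy 4.x and valid API credentials

    # Hypothetical helper: a real bot would query a court-records feed.
    def fetch_new_lawsuits():
        return ["United States v. Example Corp (N.D. Cal.)"]

    client = tweepy.Client(
        consumer_key="...", consumer_secret="...",        # placeholders
        access_token="...", access_token_secret="...",    # placeholders
    )

    # The whole "bot": an unattended loop that posts without human review.
    while True:
        for case in fetch_new_lawsuits():
            client.create_tweet(text=f"New federal lawsuit filed: {case}")
        time.sleep(3600)  # check again in an hour

The same handful of lines, pointed at divisive content instead of court records, is all a disinformation operator needs to run a fully automated account.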

Botnet

A botnet is a network of automated accounts that follow each other, mimicking “real,” organic social media interactions. Networks of bots can be used to amplify specific narratives in order to make them appear grassroots, a process known as digital astroturfing. They can be used to flood social media, gaming popularity-based algorithms to draw attention to or distract from political events. They can also be used to hijack hashtags used by real people in order to disrupt their activity.
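One rough way researchers spot this kind of coordination is to look for many distinct accounts posting identical text within a short time window. The sketch below is an illustrative heuristic only, with invented sample data; it is not any platform’s or researcher’s actual detection method.

    from collections import defaultdict

    # Invented sample data: (account, text, unix_timestamp)
    posts = [
        ("acct_a", "Candidate X is surging!", 1000),
        ("acct_b", "Candidate X is surging!", 1004),
        ("acct_c", "Candidate X is surging!", 1009),
        ("acct_d", "Lovely weather today", 1010),
    ]

    def flag_coordinated(posts, min_accounts=3, window_seconds=60):
        """Flag identical messages posted by many accounts in a short window,
        a common tell of digital astroturfing."""
        by_text = defaultdict(list)
        for account, text, ts in posts:
            by_text[text].append((account, ts))
        flagged = []
        for text, entries in by_text.items():
            accounts = {a for a, _ in entries}
            times = [t for _, t in entries]
            if len(accounts) >= min_accounts and max(times) - min(times) <= window_seconds:
                flagged.append(text)
        return flagged

    print(flag_coordinated(posts))  # -> ['Candidate X is surging!']

Real botnets vary their wording and timing to evade exactly this kind of check, which is why detection remains an arms race.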

Troll Farm

On social media, a troll is someone who deliberately posts provocative comments or attacks other users to provoke a reaction. When many such users coordinate to influence public opinion—often working for a company and selling their services—they are called a troll farm. Perhaps the most famous example is the Internet Research Agency, based in St. Petersburg, Russia. Prior to the 2016 election, the IRA created social media accounts posing as Americans in order to influence American public opinion in a way that was beneficial for Russia.

Deepfake

A portmanteau of “deep learning” and “fake,” deepfakes are videos generated with machine learning techniques to make it look like individuals have done or said things they never did. Mainstream attention first turned to deepfakes when Vice reported on fake pornographic videos posted to Reddit. Since then, deepfakes have been used for fun, such as depicting Barack Obama mouthing words from audio tracks, and to make a point, as when a video posted to Facebook made it seem like Mark Zuckerberg was delivering an autocratic speech, a test of whether Facebook would leave the post up. Deepfakes are likely to become a bigger problem as AI and machine learning techniques grow more sophisticated. For now, though, digital deception more often relies on “cheapfakes” or “shallowfakes”: existing video edited to create misleading content, the most prominent example being a recent video of Nancy Pelosi artificially slowed to make her appear drunk.
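To illustrate just how little technology a cheapfake requires, the sketch below slows a clip to 75 percent speed by calling the widely available ffmpeg tool from Python. The filenames are placeholders, and ffmpeg must be installed; no machine learning is involved at all.

    import subprocess

    # A "cheapfake" needs no AI: slowing video and audio to 75% of
    # normal speed is a single stock ffmpeg command. Filenames are
    # placeholders.
    subprocess.run([
        "ffmpeg", "-i", "original.mp4",
        "-filter:v", "setpts=PTS/0.75",  # stretch video timestamps (slower playback)
        "-filter:a", "atempo=0.75",      # slow audio without changing pitch
        "slowed.mp4",
    ], check=True)

That one-command barrier to entry, compared with the data and compute a deepfake demands, is why shallowfakes dominate digital deception today.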

Dark money

In the United States, certain types of organizations, most often political nonprofits and LLCs, do not have to disclose the names of their donors. Under campaign finance law, these “dark money” groups are not allowed to give money directly to candidates. However, they can accept donations of any size and spend as much as they want on independent expenditures supporting or attacking candidates, all while keeping their donors’ names secret. Groups can spend millions on advertisements without viewers ever knowing who paid for them. The existence of dark money adds to the general opacity of digital disinformation, further obscuring the original sources of misleading political messaging.