Submissions:2018/Fighting Misinformation and Fake News on Wikipedia
Due to limited space, WikiConference North America 2018 unfortunately could not accommodate this submission in its program this year.
Please check out our Unconference for opportunities to present and share there.
- Title
- Fighting Misinformation and Fake News on Wikipedia
- Theme (optional)
- Harassment, Civility, & Safety
- Type of submission
- Panel
- Author
- Gleb Tsipursky
- E-mail address
- gleb@intentionalinsights.org
- Wikimedia username
- Gleb_Tsipursky
- Affiliation(s) (optional)
- Pro-Truth Pledge
- Abstract
Brief description: The scourge of misinformation and fake news has become a catastrophic problem undermining online discourse, including on Wikipedia. Recent behavioral science research has shown why people believe and share misinformation, and how we can prevent them from doing so. That research is the focus of this panel, composed of scholars and activists working to fight misinformation on Wikipedia and elsewhere.
Long description: Few would dispute that many have lied to achieve their political agendas in the past. However, the recent and current political climate will arguably be recalled by future generations as the time of “fake news,” “alternative facts,” and an explosion of viral untruths on the ever-expanding worldwide social network. Recent political events, such as the tactics used by Donald Trump’s campaign during the 2016 U.S. presidential election and the lies spread by the “Vote Leave” campaign in the U.K. Brexit referendum, led the venerable Oxford Dictionary to choose “post-truth” as its 2016 word of the year: “circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (Oxford Dictionary, 2016, n.p.).
The pervasiveness of social media has dramatically amplified the spread of untruths. According to the Pew Research Center, seven out of ten Americans use social media. Twitter reported 330 million users by the end of 2017 (Statista, 2017). Facebook is the most popular platform, with 68 percent of U.S. adults using it and 74 percent of users logging in daily (Pew Research Center, 2018). According to Baum (1994), a large portion of conditioned reinforcers and punishers are social in nature. Thus, social media provides a virtual social community within which behavior may be reinforced or punished by those in one’s social network. Arguably, this online network is often more extensive than one’s “real life” social network, both in sheer numbers and in access to each other’s lives: Facebook users have more access to the verbal behavior of their Facebook friends than to that of their real-life friends, whom they may go months or years without seeing in person.
The impact of sharing misinformation is vast. Sixty-two percent of U.S. adults get news on social media (Gottfried & Shearer, 2016). A poll by Ipsos, conducted in late November and early December 2016, showed that American adults are prone to being deceived by fake news headlines. The 3,015 adults surveyed were shown six election-related headlines, three fake and three true, and asked if they had previously read them. If they had read a headline, respondents were asked to rate it as “very accurate,” “somewhat accurate,” “not very accurate,” or “not at all accurate.” Of those who had read the fake election-related headlines, approximately 75 percent rated them “very accurate” or “somewhat accurate” (Silverman & Singer-Vine, 2016).
A study that compared true and fake election-related news stories on Facebook by the number of engagements – reactions, comments, and shares – showed that in the three months before the 2016 U.S. presidential election, the top 20 fake election-related news stories received more engagements than the top 20 real news stories: 8,711,000 compared to 7,367,000 (Silverman, 2016). Another study, which examined a larger set of fake news stories over the same three-month period, found that 156 misleading news stories received just under 38 million shares on Facebook (Allcott & Gentzkow, 2017). Note that the researchers in this study counted only shares rather than total engagements; the latter number would have been much higher.
Notably, the Washington Post recently reported on an unpublished study out of Ohio State University which found that fake news likely had a significant impact on the 2016 U.S. presidential election. The study estimated that around 4 percent of voters who supported Barack Obama in 2012 were dissuaded from voting for Hillary Clinton by fake news stories. This includes the 20 percent of Obama supporters who believed that Clinton approved weapons sales to Islamic jihadists, a story that no fact-checker has verified. Of the 25 percent of Obama supporters who believed at least one fake story about Clinton, 45 percent did not vote for her; of the 75 percent who did not believe any of the stories, 89 percent voted for Clinton. A regression analysis indicated that, among the factors that might lead a 2012 Obama voter not to vote for Clinton, belief in fake news accounted for 11 percent of the variance, potentially enough to affect the outcome of the election (Blake, 2018).
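For illustration, the defection gap implied by those reported percentages can be checked with a quick back-of-the-envelope calculation. The variable names and the raw comparison below are ours; the study’s roughly 4 percent estimate comes from a regression that controls for other factors.
<syntaxhighlight lang="python">
# Illustrative arithmetic from the percentages reported in Blake (2018).
# Variable names and the raw comparison are ours; the study's ~4 percent
# figure comes from a regression that controls for other factors.

believed_fake = 0.25             # Obama 2012 voters who believed >= 1 fake story about Clinton
defected_if_believed = 0.45      # of believers, share who did not vote for Clinton
defected_if_not = 1 - 0.89       # of non-believers, share who did not vote for Clinton (11%)

# Overall share of 2012 Obama voters who did not vote for Clinton, implied by the figures above
overall_defection = (believed_fake * defected_if_believed
                     + (1 - believed_fake) * defected_if_not)

# Raw, uncontrolled gap attributable to belief in fake news
raw_gap = believed_fake * (defected_if_believed - defected_if_not)

print(f"Implied overall defection: {overall_defection:.1%}")  # ~19.5%
print(f"Raw gap from fake-news belief: {raw_gap:.1%}")        # ~8.5%
</syntaxhighlight>
The raw gap is larger than the study’s estimate because the regression controls for other factors that also predict defection; the calculation is meant only to show how the reported percentages relate to one another.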
Fake news comes from a variety of sources, but according to U.S. intelligence agencies, a major portion has originated from Russia’s efforts to use digital propaganda to influence the U.S. election. Recent U.S. congressional investigations have shed light on Russia’s successful efforts (Kwong, 2017). Additionally, political partisans on both sides create massive amounts of fake news (Green & Issenberg, 2016), as do people and entities seeking to profit financially from spreading it (Subramanian, 2017).
Of course, the United States is far from unique in the impact of fake news. The United Kingdom was another target of Russia’s digital propaganda effort, with researchers finding many hundreds of accounts operated by the Russian Internet Research Agency for the purpose of spreading fake news to influence U.K. politics (Booth, Weaver, Hern, & Walker, 2017). Russia-owned accounts spread misinformation in Spain to inflame the Catalan independence movement (Palmer, 2017), and used misinformation to try to influence the 2017 German elections (Shuster, 2017). The 2017 French elections also drew a great deal of fake news, with a substantial amount coming from Russian-backed accounts (Farand, 2017). Those outside the United States are similarly susceptible to believing fake news when exposed to it. For example, a research study on misinformation in the 2017 French election found that exposing voting-age French people to deceptive election-related statements resulted in the study subjects believing presidential candidate Marine Le Pen’s falsehoods.
The specific influence of candidates on their supporters’ sharing of false information may be explained in part by research on emotional contagion, which shows that followers tend to emulate their leaders (Hatfield, Cacioppo, & Rapson, 1993).
Although our society as a whole suffers when deception is rampant in the public sphere, individuals who practice deceptive behaviors often do so to support or enhance their own agendas. This situation is reminiscent of a “tragedy of the commons,” as described in Hardin’s seminal article in Science (Hardin, 1968). Hardin demonstrated that among groups of people who share a common resource without any outside controls, each individual may benefit from taking more of the common resource than his or her fair share, leading to individual gain at great cost to the community as a whole. Solving tragedies of the commons requires “mutual coercion, mutually agreed upon by the majority of the people affected” (Hardin, 1968, p. 1247), so as to prevent these harmful outcomes in which a few gain at the cost of everyone else. In other words, contingencies must be arranged such that taking more than one’s fair share results in aversive consequences that outweigh the reinforcers.
A societal example of a tragedy of the commons is environmental pollution (Vogler, 2000). Society as a whole benefits from clean air and water, yet individual polluters may gain more – at least in the short and medium term – from polluting (Hanley & Folmer, 1998). The sustainability movement presents many examples of successful efforts to address the tragedy of the commons (Ostrom, 2015). As Hardin predicted, only substantial disincentives for polluting outweigh its benefits (Fang-yuan, 2007). Particularly illuminating is a theoretical piece applying psychological research to the environmental tragedy of the commons: in addition to coercion by an external party such as the government, the commons can be maintained by a combination of providing credible information on what helps and hurts the environment, appealing to people’s identities, setting up new institutions or changing existing ones, and shifting participants’ incentives (Van Vugt, 2009).
Research on successful strategies used by the environmental sustainability movement fits well with work on libertarian paternalism and choice architecture. Libertarian paternalism, a term from behavioral economics, refers to an approach by which private and public institutions influence human behavior for social good while respecting individual choice (Sunstein & Thaler, 2003a; Sunstein & Thaler, 2003b; Thaler & Sunstein, 2008). Choice architecture is the method libertarian paternalists use to design how choices are presented to consumers, increasing the probability that people choose the option with the greatest personal and social benefit, such as healthy foods or registering for organ donation. Methods in choice architecture include setting up default options, anticipating errors, giving clear feedback, and creating appropriate incentives (Johnson et al., 2012; Jolls, Sunstein, & Thaler, 1998; Selinger & Whyte, 2011; Thaler et al., 2014).
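To make those choice-architecture methods concrete in the context of link sharing, here is a minimal, purely hypothetical sketch. The function name, the prompt flow, and the idea of a “read before sharing” default are our own illustration and do not describe any existing Wikipedia or social-media feature.
<syntaxhighlight lang="python">
# Hypothetical sketch of choice architecture applied to link sharing.
# Nothing here describes a real platform feature; names and flow are
# invented solely to illustrate defaults, feedback, and incentives.

def share_prompt(url: str, opened_by_user: bool, flagged_by_fact_checkers: bool) -> str:
    """Return the default action offered when a user tries to share a link."""
    if flagged_by_fact_checkers:
        # Clear feedback: surface the fact-check before the share goes out.
        return f"Show fact-check summary for {url} before sharing"
    if not opened_by_user:
        # Default option plus anticipating errors: nudge reading before sharing,
        # while still letting the user override (individual choice is preserved).
        return f"Open {url} first (default), or share anyway"
    # Appropriate incentive: frictionless sharing once the article has been read.
    return f"Share {url}"


if __name__ == "__main__":
    print(share_prompt("https://example.org/story",
                       opened_by_user=False,
                       flagged_by_fact_checkers=False))
</syntaxhighlight>
The specifics of the prompt are beside the point; what matters is the structure: the socially beneficial option is the default, feedback is immediate, and the individual’s choice is never removed, which is precisely the combination libertarian paternalism prescribes.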
With respect to the spread of fake news, a parallel may be drawn. While society as a whole is hurt by viral deception, some benefit from the spread of lies, whether by deliberately lying to push an agenda or align with a social group, by conserving response effort and sharing without checking, or by dismissing fact-checking and spreading misinformation that supports their own views. Recent research by prominent scholars has discussed the need for a variety of safeguards to protect our global society from fake news (Lazer et al., 2018). Another article suggested that any effort to address the situation “must involve technological solutions incorporating psychological principles, an interdisciplinary approach that we describe as ‘technocognition’” (Lewandowsky, Ecker, & Cook, 2017, p. 353).
To consider interventions to increase truthful behavior, one must first examine the conditions under which one behaves untruthfully. For Skinner, a lie is a response emitted under circumstances that would otherwise control an incompatible response (Skinner, 1957). Skinner (1974) continues: “The truth of a statement of fact is limited by the sources of the behavior of the speaker, the control exerted by the current setting, the effects of similar settings in the past, the effects upon the listener leading to precision or to exaggeration or falsification, and so on” (p. 150).
Most otherwise honest adults would likely admit to lying in a variety of contexts to contact positive consequences, avoid negative consequences, or both. One may lie about one’s schedule to avoid dinner plans with an annoying acquaintance and thereby escape an unpleasant situation. Upon receiving a gift, the recipient may falsely express great appreciation to salvage a friendly moment from which they derive reinforcement. These examples may be better described as untruthful verbal behavior under the control of competing contingencies. One contingency supports truthful behavior, likely shaped by a history of socially mediated positive consequences for truthful behavior and negative consequences for untruthful behavior. The other supports kind behavior, likewise shaped by a history of socially mediated positive consequences for kind behavior and negative consequences for unkind behavior.
Skinner (1971) presents an analysis of behavior that is for the good of the individual, for the good of others, or for the good of the group or culture. In his recommendations for designing a culture, Skinner suggests that cultures should arrange contingencies to support individual behavior that benefits others and the group (Skinner, 1971). Otherwise, individuals tend to behave in ways that benefit themselves at the expense of the group. This suggestion mirrors the tragedy of the commons, in which individuals must choose between behaviors that benefit themselves and behaviors that benefit the group. The same dynamic appears in the sharing of fake news to advance one’s own agenda at the expense of the opposing agenda or the general population.
Another relevant factor may be an individual’s values. Skinner (1953) discussed values in terms of assigning the labels “good” and “bad” and describing what one “ought” to do. Describing a stimulus as good or bad simply describes its reinforcing effects; similarly, “ought” describes behaviors that are likely to be reinforced. From a Relational Frame Theory perspective (RFT; Hayes, Barnes-Holmes, & Roche, 2001), values are verbal behavior that transforms the psychological function of objects and events. Essentially, when one values honesty, stimuli (e.g., verbal behavior, people) relating to honesty acquire a reinforcing function, and stimuli relating to dishonesty acquire a punishing function. Values also function as rules governing behavior, specifying consequences for behavior in relation to the rule (the value): “valuing honesty” predicts honest behavior.
Drawing on this body of research, the panelists will discuss how behavioral science can be used to combat misinformation and fake news, on Wikipedia and beyond.
- Length of presentation
- 30-45 minutes
- Special requests
- None
- Preferred room size
- 100
- Have you presented on this topic previously? If yes, where/when?
- Yes, at MisinfoCon DC and the BridgeUSA National Convention
- If you will be incorporating a slidedeck during your presentation, do you agree to upload it to Commons before your session, with a CC-BY-SA 4.0 license, including suitable attribution in the slidedeck for any images used?
- Yes
- Will you attend WikiConference North America if your submission is not accepted?
- No
Interested attendees
If you are interested in attending this session, please sign with your username below. This will help reviewers decide which sessions are of high interest. Sign with four tildes (~~~~).
- Add your username here.