Submissions:2019/The Good, the Bot, and the Ugly: Critical Media Literacy and Problematic Information in Wikipedia

Latest revision as of 08:30, 11 October 2019

This submission has been withdrawn by its author.



Title:

The Good, the Bot, and the Ugly: Critical Media Literacy and Problematic Information in Wikipedia

Theme:

Reliability of Information
+ Inclusion and Diversity
+ Tech & Tools
+ Harassment, Civility & Safety

Type of session:

Presentation

Abstract:

Although Wikipedia has recently been celebrated as having ‘largely avoided the “fake news” problem’ by Wikipedia co-founder Jimmy Wales himself (Harrison 2019), the encyclopedia contends with other types of problematic information on a daily basis. These forms of misinformation and disinformation include vandalism, a common problem on the crowdsourced platform (Geiger and Ribes 2010; Tran and Christen 2015); systemic biases related to the misrepresentation of marginalized identities and topics (Bazely 2018; Gallert and van der Velden 2015; Glott et al. 2010); as well as conflict of interest (COI) editing, which further endangers the neutrality of editors and articles (Pinsker 2015). In discussing the encyclopedia’s capability to combat fake news, Jimmy Wales and others acknowledge Wikipedia’s unique functioning (Pinsker 2015). As a socially-driven platform with multiple processes and guidelines in place for information verification, Wikipedia has both the policies and the people to address problematic information. Furthermore, as Wikipedia’s reputation in academic and public spheres has improved, these resources have become more recognizable and recognized (Jennings 2008; Kittur and Kraut 2008). Less well known is that ‘Wikipedia would be a shambles without bots’ (Nasaw 2012). Bots patrol editors’ contributions and alert administrators to potential trolls and vandals (Geiger 2011; Martin 2018). They also make significant contributions to the reduction of problematic information in the encyclopedia.

Wikipedia bots play a key role in addressing and reducing problematic information (misinformation and disinformation) on the encyclopedia. However, it is ultimately reductive to construe bots as having merely benign impacts. In order to understand bots and other algorithms as more than just tools, this presentation centers a postdigital theorization of Wikipedia bots as ‘agents’ that co-produce knowledge in conjunction with human editors and actors. Following this theorization, the talk highlights case studies of three Wikipedia bots, ClueBot NG, AAlertBot, and COIBot, each of which engages in some type of information validation in the encyclopedia. The activities involving these bots ultimately support the argument that information validation processes in Wikipedia are complicated by their distribution across multiple human-computer relations and agencies. Although these bots are programmed to combat problematic information, their efficacy is challenged by social, cultural, and technical issues related to misogyny, systemic bias, and conflict of interest. The presentation concludes by envisioning possibilities for postdigital education that encourage a more critical and nuanced perspective towards the use of Wikipedia bots. Studying the function of Wikipedia bots makes space for extending educational models for critical media literacy. In the postdigital era of problematic information, students should be alert to how the human and the nonhuman, the digital and the nondigital, interfere and exert agency in Wikipedia’s complex and highly volatile processes of information validation.

Disclosure: This proposal is drawn from an article co-written by the submitter (with co-author Jialei Jiang) and published in the academic journal Postdigital Science and Education, available at https://rdcu.be/bQumO

Academic Peer Review option:

Yes

Author name:

Matthew A. Vetter

E-mail address:

mvetter@iup.edu

Wikimedia username:

Matthewvetter

Affiliated organization(s):

Indiana University of Pennsylvania

Estimated time:

25-30 mins

Preferred room size:

whatever the committee feels appropriate

Special requests:

Have you presented on this topic previously? If yes, where/when?:

No, but I have published on this topic, see: https://rdcu.be/bQumO

If your submission is not accepted, would you be open to presenting your topic in another part of the program? (e.g. lightning talk or unconference session)

Yes