Generative AI and Wikipedia: A discussion


This submission has been accepted for WikiConference North America 2024.



Title:

Generative AI and Wikipedia: A discussion

Type of session:

Round Table

Session theme(s):

Community Engagement, Community Health

Abstract:

Generative AI brings both threats and opportunities to Wikipedia. With its capability to transform large amounts of content and sources into new formats like text, audio, or visual summaries, generative AI has the potential to help address some of Wikipedia’s longstanding challenges for both editors and readers. On the editing side, for example, AI could be used to scale mentorship and education of new editors struggling to understand years of complex accreted policies and content discussions. For readers, it could help them navigate long, dense articles and topics by providing a more personally tailored learning experience that takes into account factors like specific area of interest, reading level, and learning style (audio, visual, etc.).

But the speed and ease of generating new content, combined with AI’s well-known hallucination problem, pose risks as well: good-faith editors may use AI to generate and add content that they haven’t independently verified against a reliable source, unintentionally creating misinformation. Bad actors may use AI to generate both malicious content and authentic-looking sources that purport to support it, spreading disinformation. And the rapidity with which the rest of the world (e.g., academic research) is adopting this technology calls into question whether sources that Wikipedians have traditionally considered reliable secondary sources need to be vetted more thoroughly or approached differently.

Finally, on the reader side, the wide availability of commercial AI tools like ChatGPT or Google’s Search Generative Experience – all trained on Wikipedia content, but rarely or never crediting Wikipedia in their responses – may affect our ability to deliver high-quality information to the general public by severing the connection between the content and the community that creates, maintains, and evolves it.

The theme of this conference is “Crossroads”, and we as a community are at a crossroads with generative AI. In this roundtable, we’ll bring together a group of Wikimedians who have thought deeply about generative AI and the Wikimedia projects. We’ll discuss questions such as:

  • How can generative AI help address some of Wikipedia’s longstanding challenges – e.g., filling content gaps, attracting and retaining contributors, and truly fulfilling our mission of delivering knowledge to every human in the world, regardless of factors like their education level or reading ability?
  • How are young people engaging with Wikipedia in the age of generative AI?
  • Are people still reading Wikipedia when they could instead ask ChatGPT a question? What can we as a community do to remain the world’s #1 source of information on any topic?
  • How might we begin discussions as a community about policies and practices that may need to be updated because of the impact of AI, e.g., Reliable Sources?
  • Our community of volunteer human contributors is our superpower: How can we strengthen our community in light of AI?
  • What does the future hold for Wikipedia and generative AI?

Author name(s):

Lead authors: Maryana Pinchuk, LiAnna Davis; Panelists: Andrew Lih, Bob Cummings, Ximena Gallardo, Carwil Bjork-James

Wikimedia username(s):

E-mail address:

mpinchuk@wikimedia.org, lianna@wikiedu.org

Affiliated organization(s):

Wikimedia Foundation, Wiki Education

Estimated length of session

45-60 minutes

Will you be presenting remotely?

Okay to livestream?

Livestreaming is okay

Previously presented?

Special requests:

We are open to others joining the roundtable!

