Submissions:2023/Introduction to Credibility Bot


This submission has been accepted for WikiConference North America 2023.



Title:

Introduction to Credibility Bot

Theme:

Credibility / Mis- and Disinformation (WikiCred)

Type of session:

Lightning talk

Abstract:

Wikipedia has guidelines intended to ensure articles are supported by reliable sources. How are these guidelines implemented in practice? Wikipedia editors currently try to review articles for quality issues such as inaccurate information or bias; however, since Wikipedia is a large online volunteer community, practices for discussing and maintaining quality are uneven, impromptu, and difficult to scale. Wikipedia editors struggle to get the signals they need to collectively discuss or prioritize their work improving articles. Even when volunteer editors do have the capability to assess sources in one topic area, they often lack the tools to do so across other topics, and ultimately in other languages of Wikipedia.

Credibility Bot aims to build an easily installed credibility toolkit for Wikipedia editors. Working closely with committed WikiProjects, it will bring information about sources together with actionable reports and alerts. Credibility Bot can enable the efficient, ongoing evaluation of quality beyond any single WikiProject or subject domain. Credibility Bot data can be archived as a citable, timestamped snapshot and made globally available for others to use in openly accessible platforms like Wikidata. In this way, the credibility toolkit can have a scalable impact on the problem of source quality across the entire Wikipedia ecosystem.

The discussion/workshop following this talk is at Submissions:2023/Credibility_Bot_Feedback_and_Brainstorming.
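
To make the kind of per-article source report the abstract describes more concrete, here is a minimal sketch in Python. It uses the public MediaWiki API (action=query, prop=extlinks) to fetch the external links cited on a single English Wikipedia article and tallies them by domain against a placeholder reliability list. The article title and the RATINGS mapping are invented for illustration; they are not part of Credibility Bot's actual design or data.

<syntaxhighlight lang="python">
# Hypothetical sketch only: the article choice and the RATINGS table
# below are invented placeholders, not Credibility Bot output.
from collections import Counter
from urllib.parse import urlparse

import requests

API = "https://en.wikipedia.org/w/api.php"


def cited_domains(title: str) -> Counter:
    """Tally the domains of external links cited on one article."""
    params = {
        "action": "query",
        "prop": "extlinks",
        "titles": title,
        "ellimit": "max",  # first batch only; a real bot would follow 'continue'
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    domains = Counter()
    for page in data["query"]["pages"].values():
        for link in page.get("extlinks", []):
            host = urlparse(link["*"]).hostname  # extlinks items use the "*" key
            if host:
                domains[host] += 1
    return domains


# Placeholder reliability labels, invented for this sketch.
RATINGS = {
    "www.nytimes.com": "generally reliable",
    "example-blog.com": "questionable",
}

if __name__ == "__main__":
    for domain, count in cited_domains("Climate change").most_common(10):
        print(f"{count:4d}  {domain:35s}  {RATINGS.get(domain, 'unrated')}")
</syntaxhighlight>

A production toolkit along the lines of the abstract would go further: following API continuation, parsing citation templates rather than raw external links, aggregating across every article tagged by a WikiProject, and archiving the result as a citable, timestamped snapshot.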

Author name:

Jake Orlowitz, WikiBlueprint (consulting)

E-mail address:

jake@hackshackers.com

Wikimedia username:

User:Ocaasi

Affiliated organization(s):

Hacks/Hackers, NewsQ, WikiCred

Estimated time:

1 hour

Special requests:

Have you presented on this topic previously? If yes, where/when?:

No

Okay to livestream?

Do not livestream

If your submission is not accepted, would you be open to presenting your topic in another part of the program? (e.g. lightning talk or unconference session)

Yes