Surfacing Trust, Expertise, and Provenance for better News Sharing


This submission has been accepted for WikiConference North America 2019.



Title:

Surfacing Trust, Expertise, and Provenance for better News Sharing

Theme:

Reliability of Information

Type of session:

Presentation

Abstract:

Assessing the accuracy of news in an era when misinformation can spread in a matter of a few clicks has become a growing concern. News consumers are often exposed to content for which they may not have enough domain knowledge or expertise. Moreover, the circulation of news through social media has made it more likely for users to encounter news sources with which they are not acquainted and whose credibility record is therefore unknown to them. These issues make it more difficult for news consumers to assess the validity of news content relying only on their own judgment.

Several approaches have been introduced for tackling the problem of information pollution, from crowdsourcing annotations to training machine learning models that label misinformation. We are designing and building an experimental platform (Trustnet -- http://trustnet.csail.mit.edu) that lets us explore a part of the solution space that has not been examined yet. This platform is essentially a news sharing/reader system with an embedded social network. In addition to accommodating traditional "follow" relationships, the platform enables users to explicitly mark other sources (users or news publishing entities) as trustworthy. Trust relationships are kept private and asymmetric: Alice trusting Bob does not imply that Bob trusts Alice. Users can leverage their trust network to filter their newsfeed, for example by choosing to view only those articles that have been verified or refuted by their trusted sources, or even those on which their trusted sources disagree. Trustnet extends the concept of trusted sources to build a trust network for each user. When a user leaves a credibility assessment on an article, the platform propagates that assessment to all users who either directly trust the assessor or have an indirect trust path to them, while maintaining the assessor's anonymity.
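To make the propagation model concrete, here is a minimal sketch in Python of how asymmetric trust relationships and anonymized assessment propagation could work. The TrustGraph class, its methods, and the inbox structure are hypothetical illustrations, not Trustnet's actual implementation or API.

```python
# Minimal sketch (hypothetical names): asymmetric trust graph plus propagation
# of an anonymized credibility assessment to everyone with a trust path to the assessor.
from collections import defaultdict, deque

class TrustGraph:
    def __init__(self):
        # trusts[a] is the set of sources user a explicitly trusts;
        # the relation is asymmetric: b in trusts[a] does not imply a in trusts[b].
        self.trusts = defaultdict(set)

    def add_trust(self, truster, trusted):
        self.trusts[truster].add(trusted)

    def reachers(self, assessor):
        """Return every user with a direct or indirect trust path to the assessor."""
        reached, frontier = set(), deque([assessor])
        while frontier:
            node = frontier.popleft()
            for user, trusted in self.trusts.items():
                if node in trusted and user not in reached:
                    reached.add(user)
                    frontier.append(user)
        return reached

def propagate_assessment(graph, assessor, article_id, verdict, inboxes):
    """Deliver the assessment to everyone who trusts the assessor, withholding their identity."""
    for user in graph.reachers(assessor):
        inboxes[user].append({"article": article_id, "verdict": verdict})

# Example: Alice trusts Bob, and Carol trusts Alice, so Carol indirectly trusts Bob.
g = TrustGraph()
g.add_trust("alice", "bob")
g.add_trust("carol", "alice")
inboxes = defaultdict(list)
propagate_assessment(g, "bob", "article-42", "refuted", inboxes)
# Both Alice and Carol now see that a trusted source refuted article-42,
# without learning that the assessor was Bob.
```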

Trustnet therefore effectively crowdsources the credibility assessment of news but leaves it to each user to decide whose judgment they trust, and in doing so makes fact-checking appealing to a broader audience. In addition, since credibility information is offered alongside each article on the same platform, users can readily receive it without actively seeking it out from external sources. To afford structured assessments, we are conducting user studies on how people reason about the truthfulness of news articles and how to capture those reasons in the interface. This categorization of rationales can help users assign different weights to the different reasons provided by their trusted network when filtering for accurate or inaccurate news. Furthermore, it can potentially help those who post credibility evaluations by fostering critical thinking and media literacy.
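As one way such categorized rationales could feed into filtering, the sketch below computes a reason-weighted credibility score for an article. The reason categories, weights, and scoring rule are assumptions made for illustration, not the categorization our user studies will produce.

```python
# Hypothetical sketch: trusted assessments tagged with a reason category,
# combined into a single score using reader-chosen weights per reason.

# +1 = verified, -1 = refuted, each with the assessor's stated reason.
assessments = [
    {"verdict": -1, "reason": "factual_error"},
    {"verdict": -1, "reason": "misleading_headline"},
    {"verdict": +1, "reason": "reputable_source"},
]

# Each reader decides how much each kind of reason matters to them.
weights = {"factual_error": 1.0, "misleading_headline": 0.5, "reputable_source": 0.8}

def credibility_score(assessments, weights):
    """Weighted sum of trusted assessments; negative scores suggest inaccuracy."""
    return sum(a["verdict"] * weights.get(a["reason"], 0.0) for a in assessments)

# A reader could hide articles whose score falls below a threshold of their choosing.
print(credibility_score(assessments, weights))  # -0.7 in this example
```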

Academic Peer Review option:

No

Author name:

Farnaz Jahanbakhsh

E-mail address:

farnazj@mit.edu

Wikimedia username:

Affiliated organization(s):

Massachusetts Institute of Technology

Estimated time:

20-30 minutes

Preferred room size:

Special requests:

Have you presented on this topic previously? If yes, where/when?:

No

If your submission is not accepted, would you be open to presenting your topic in another part of the program? (e.g. lightning talk or unconference session):



