https://wikiconference.org/api.php?action=feedcontributions&user=Hearvox&feedformat=atomWikiConference North America - User contributions [en]2024-03-28T23:06:20ZUser contributionsMediaWiki 1.35.13https://wikiconference.org/index.php?title=User:Hearvox&diff=16186User:Hearvox2020-06-19T17:01:37Z<p>Hearvox: Add link to Iffy.news WikiCred project</p>
<hr />
<div>I am @hearvox: Barrett Golding, a radio and web producer from Bozeman, MT, USA:<br />
* Fellow at the [https://en.wikipedia.org/wiki/Missouri_School_of_Journalism#Reynolds_Journalism_Institute Reynolds Journalism Institute].<br />
* Executive Producer of [https://hearingvoices.org/ Hearing Voices] public radio series ([https://en.wikipedia.org/wiki/NPR NPR]-distributed and [https://en.wikipedia.org/wiki/Peabody_Award Peabody Award]-winning).<br />
* [https://en.wikipedia.org/wiki/United_States_Artists United States Artists]: [https://en.wikipedia.org/wiki/List_of_United_States_Artists_(USA)_Fellowship_recipients#2010 Fellowship recipient].<br />
* Podcast producer/consultant for [https://en.wikipedia.org/wiki/Human_Rights_Watch Human Rights Watch] and the Southern Poverty Law Center's [https://en.wikipedia.org/wiki/Southern_Poverty_Law_Center#Tolerance.org Teaching Tolerance].<br />
* A.k.a., The Wandering Jew on-air (volunteer DJ at [https://en.wikipedia.org/wiki/KGLT-FM KGLT-FM]).<br />
My WikiCred project is: [[2019/Grants/Iffy.news]]<br />
<br />
https://en.wikipedia.org/wiki/User:Hearvox</div>Hearvoxhttps://wikiconference.org/index.php?title=Talk:2019/Grants/Iffy.news&diff=16109Talk:2019/Grants/Iffy.news2020-04-24T18:52:36Z<p>Hearvox: /* Reply to Sj */</p>
<hr />
<div>Hello Barrett, thanks for this proposal. Could you say a bit more about what you have in mind? <br />
<br />
Some specific questions: <br />
* How are you thinking of evaluating sources based on their article? What aspects of articles would you look at, in which languages? <br />
* How much of the composite credibility score is built now? Is there a demo? <br />
* Have you experimented w/ doing data analysis based on the WP API before? <br />
* How would you make the composite credibility assessments available -- do you have a schema / API in mind for that work?<br />
<br />
Warmly, [[User:Sj|Sj]] ([[User talk:Sj|talk]]) 23:39, 22 April 2020 (UTC)<br />
<br />
=== Reply to [[User:Sj|Sj]] ===<br />
''Q: Language?''<br><br />
''A:'' To start, Iffy.news will be US-only. The language of the suspect sites is English. The control group of US daily newspapers is mostly English, with a few in Spanish, Arabic, and several Asian languages. My index of those dailies is at [https://news.pubmedia.us/ NewsNetrics].<br />
<br />
''Q: Evaluating sources based on their article?''<br><br />
''A:'' The evals aren't of articles but of the credibility of the news publisher: the domain name, based on evals by trained reviewers (Media Bias/Fact Check, NewsGuard, etc.). My raw data for those unreliable sites is now in a spreadsheet: [https://docs.google.com/spreadsheets/d/1ck1_FZC-97uDLIlvRJDTrGqBk0FuDe9yHkluROgpGS8/edit#gid=707857677?usp=sharing Iffy 2020-04].<br />
<br />
''Q: How much of the composite credibility score is built now?''<br><br />
''A:'' None. I first need to determine which API-retrievable signals (e.g., years online, Wikipedia article info) most accurately distinguish fake-news from fact-based news sites. The hypothesis is outlined in the repo: [https://github.com/hearvox/unreliable-news/blob/master/topics/credscore.md CredScore]. (Once we know the most accurate signals, we may throw some AI at it to get the best weight for each signal.)<br />
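The weighted-composite idea above can be sketched in a few lines. Everything here is a hypothetical placeholder: the signal names and weights are illustrative, not the project's final choices.

```python
# Hypothetical sketch of a composite CredScore: a weighted sum of
# normalized site-level signals. Names and weights are placeholders.

SIGNAL_WEIGHTS = {                  # weights to be tuned once the
    "years_online": 0.4,            # most accurate signals are known
    "has_wikipedia_article": 0.4,
    "listed_by_fact_checkers": 0.2,
}

def cred_score(signals: dict) -> float:
    """Return a 0-1 credibility estimate from signals scaled to 0-1."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

# Example: a long-established daily vs. a new, unlisted site.
daily = cred_score({"years_online": 1.0,
                    "has_wikipedia_article": 1.0,
                    "listed_by_fact_checkers": 1.0})
newcomer = cred_score({"years_online": 0.1,
                       "has_wikipedia_article": 0.0,
                       "listed_by_fact_checkers": 0.0})
```

The AI step mentioned above would replace the hand-set weights with fitted ones; the scoring function itself stays the same shape.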
<br />
''Q: Experimented w/ data analysis based on the WP API?''<br><br />
''A:'' I've experimented with the Wikipedia API to confirm that using the (URL-encoded) news-site name as the search term often returns its article (if one exists). From the article I can pull the infobox, which often lets me machine-distinguish fake from fact publications. For instance, see the infoboxes at:<br><br />
https://en.wikipedia.org/wiki/Enid_News_%26_Eagle<br><br />
https://en.wikipedia.org/wiki/NewsPunch<br />
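A minimal sketch of that lookup, using the standard MediaWiki search API (`action=query`, `list=search`, `srsearch`). The request is only constructed here, not sent; the sample JSON is a trimmed illustration of the response shape.

```python
# Build a Wikipedia search-API request for a news-site name, then check
# whether the returned titles include a likely article for that site.
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def build_search_url(site_name: str) -> str:
    """Return a MediaWiki search-API URL for the (URL-encoded) site name."""
    params = {"action": "query", "list": "search",
              "srsearch": site_name, "srlimit": 5, "format": "json"}
    return API + "?" + urlencode(params)

def likely_article(search_json: dict, site_name: str) -> bool:
    """True if any returned title matches the site name (case-insensitive)."""
    hits = search_json.get("query", {}).get("search", [])
    return any(site_name.lower() == h["title"].lower() for h in hits)

url = build_search_url("Enid News & Eagle")
# Trimmed sample of the JSON shape the search API returns:
sample = {"query": {"search": [{"title": "Enid News & Eagle"}]}}
```

Pulling the infobox would be a second request against the matched title; this sketch only covers the name-to-article step.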
<br />
For related projects I have scripts that regularly and programmatically pull, analyze, and present data from APIs, from sources like Alexa Web Information Service, BuiltWith, Google PageSpeed, Internet Archive, and WebpageTest.org (see my overview: [https://github.com/hearvox/unreliable-news/blob/master/ref/apis-for-fact-checking.md APIs and Tools for fact-checking]). I'll adapt these scripts to the Wikipedia API. <br />
<br />
''Q: How would you make the composite credibility assessments available?''<br><br />
''A:'' Step one would be presenting the data at the site and sharing the raw data via the site, which runs on the WordPress CMS and so can use its built-in REST API to output JSON. The data will also auto-import into a public Google spreadsheet, for folks who prefer that format. (I've configured sites and sheets to share similar data in other projects.)<br />
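Consuming that JSON could look like the sketch below. The `/wp-json/wp/v2/posts` path is WordPress's documented default REST endpoint; the domain and the spreadsheet-row shape are assumptions for illustration, and no network request is made here.

```python
# Sketch: point at a WordPress site's built-in REST API and flatten the
# post JSON into rows suitable for a spreadsheet import.
import json

def posts_endpoint(domain: str, per_page: int = 10) -> str:
    """WordPress core REST route for published posts, as JSON."""
    return f"https://{domain}/wp-json/wp/v2/posts?per_page={per_page}"

def to_rows(posts_json: str):
    """Flatten WP post JSON into (date, title) rows."""
    posts = json.loads(posts_json)
    return [(p["date"], p["title"]["rendered"]) for p in posts]

endpoint = posts_endpoint("iffy.news")     # domain is a placeholder
# Trimmed sample of the JSON shape WordPress returns:
sample = json.dumps([{"date": "2020-04-24T18:52:36",
                      "title": {"rendered": "Iffy index update"}}])
rows = to_rows(sample)
```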
<br />
If CredScore proves effective at auto-detecting fake news with a high level of certainty, the next step might be browser extensions and advertising blacklists. Adaptable scripts for both already exist.</div>Hearvoxhttps://wikiconference.org/index.php?title=2019/Grants/Iffy.news&diff=157882019/Grants/Iffy.news2020-04-07T00:56:11Z<p>Hearvox: Use Wiki format for links.</p>
<hr />
<div>{{WCNA 2019 Grant Submission<br />
|name=Barrett Golding<br />
|username=hearvox<br />
|email=bg@hearingvoices.com<br />
|resume=https://www.linkedin.com/in/barrett-golding/<br><br />
https://github.com/hearvox<br><br />
https://profiles.wordpress.org/hearvox/<br><br />
https://hearingvoices.com/best-of-bg/<br />
|geography=USA<br />
|type=Research + Output<br />
|idea=Wikipedia articles for publications can be an indicator of news site reliability. This project will use the Wikipedia API to programmatically determine the likelihood of media credibility (comparing results from my database of reliable US newspaper sites and fake-news sites). This will be combined with other indicators to generate reliability estimates.<br />
|importance=An easy way to assist in evaluating the credibility of a media source.<br />
|inprogress=[https://github.com/hearvox/unreliable-news Unreliable News] repo, [https://github.com/hearvox/unreliable-news/blob/master/topics/credscore.md Cred Score] (hypothesis), [https://fact.pubmedia.us/ Fact-check Feed] (articles by US fact-checkers, 2016–present), [https://hearingvoices.com/tools/checkers/fact-checkers/ Fact Checkers] tool, [https://news.pubmedia.us/ News Netrics] media-site performance metrics. All coming together soon at [http://iffy.news Iffy.news].<br />
|relevance=Core to credibility and a new way to use Wikipedia API.<br />
|impact=Can be used by researchers, advertisers, and media consumers to evaluate source reliability.<br />
|scalability=It doesn't need to.<br />
|people=Because it needs to be done and no one is doing it.<br />
|inclusiveness=None that I can think of.<br />
|challenges=None, really, just a lot of work.<br />
|cost=1000<br />
|expenses=Develop the API script to search (by domain name) for Wikipedia articles. Import Wikipedia data. Find signals to programmatically identify articles about un/reliable sources. Submit related info to relevant Wikipedia articles (like List_of_fake_news_websites).<br />
|time=1 month<br />
|previous=NEA and CPB grants: https://hearingvoices.com/<br><br />
U of MO J-School grant: https://news.pubmedia.us/ <br><br />
United States Artists fellow: https://www.unitedstatesartists.org/fellow/barrett-golding/<br><br />
Reynolds Journalism Institute fellow:<br />
https://www.rjionline.org/stories/series/storytelling-tools<br />
}}</div>Hearvox