Comments on: FORCE2015 £1k Challenge Winner https://force11.org/group/force2015-1k-challenge-winner/ The Future of Research Communications and e-Scholarship Thu, 26 May 2022 13:42:56 +0000

By: Werner Liebregts https://force11.org/group/force2015-1k-challenge-winner/#comment-11160 Mon, 11 May 2015 07:45:17 +0000 In reply to Eduard Hovy.

Answers

Let's try to find these answers together, Eduard! I already have my own thoughts (of course), but I do not want to bias the discussion by giving my own opinion in advance. You are right in saying that it should not be a self-promoting discussion, but a fully objective and open one. At the same time, this also means that I expect none of the discussants to be narrow-minded or prejudiced.

By: Eduard Hovy https://force11.org/group/force2015-1k-challenge-winner/#comment-11159 Fri, 08 May 2015 14:09:11 +0000 In reply to Werner Liebregts.

1K PPPR Discussion

It depends; if the discussion is thoughtful and not self-promoting or stupid, then I would be happy to contribute.  But if it is simply an exercise in sloppy wishful thinking, then I will not participate.

So far I see no valid answers to the following problems with PPPR:

– why peers would take time to create new reviews post-publication (real peers, not the author's own graduate students and friends and family)

– who would select these peers and check over their reviews to remove problematic content

– where the reviews would be hosted, for how long, and how they would be organized

– who would pay attention to these reviews post-publication, since traditional citation counts are already giving much of the information we want

By: Werner Liebregts https://force11.org/group/force2015-1k-challenge-winner/#comment-11158 Fri, 08 May 2015 08:28:15 +0000 In reply to Eduard Hovy.

Coming up: discussion forums

Dear Eduard, I'm about to set up the first discussion forum on some of the issues that you've raised. Will you join the discussions and help answer the questions that you posed?

By: Eduard Hovy https://force11.org/group/force2015-1k-challenge-winner/#comment-11157 Wed, 06 May 2015 18:14:44 +0000 A response to Liebregts

Mr. Liebregts, one of the winners of the 1k challenge, proposes PPPR as a model to 'fix' the peer reviewing process.  There are several flaws with this model and with the way it is proposed.

We have all felt the sting of disappointment when our (oh so excellent) submission was rejected by incompetent or just shoddy reviewers.  And sometimes we’ve also felt the compensatory (although somehow never quite adequate) satisfaction when the same submission is accepted elsewhere.

How to fix the reviewing process? 

One can take at least four distinct approaches:

  1. Simply not review at all.  Accept everything that is submitted.  But then the filtering and/or quality control one expects from a journal or conference is lost.  Leaving it to the reader or attendee to wade through thousands of submissions in order to select what to see can be extremely inefficient.  Some organizations (like the Society for Neuroinformatics) routinely accept close to 1000 abstracts at each conference, making it a daunting proposition to decide what to actually look at. 
  2. Hire professional, highly informed, reviewers.  This is a nice ideal but not practical: it is expensive, time-consuming, and simply not an option for smaller conferences and other meetings.  Even if the budget were available, the appropriate reviewers are probably not; they are doing the actual research! 
  3. Convene committees of experts as needed for the purpose, drawing from the relevant community.  This is the current situation, which sometimes results in uninformed or sloppy reviewing. 
  4. PPPR: Accept and publish everything, but institute a procedure of post-publication review.  This is the position Mr. Liebregts espouses: “Only a few reviewers assess the quality of papers before publication. Many more experts read them after publication, have a strong opinion about their contents, but are hardly able to share it to a wide audience. Academics would greatly benefit from crowdreviewing, a post-publication peer review (PPPR) process in which anyone who has read a scientific article is asked to review it according to a standardized set of questions. Today’s consumers themselves decide whether the consumed good meets their quality standards, and this is quickly and easily shared with the rest of the world.”

Is PPPR a valid option?  While it sounds appealingly democratic, there are at least three problems that make this unworkable and rather naive:

  1. Organization: How is the PPPR process managed?  When do reviews get made, and who collects them?  Where are the reviews maintained?  By whom?  What is the process of organizing, balancing, and somehow standardizing them so that some sort of informative collective ‘wisdom’ can emerge?  Is there only one repository of reviews for a published paper?  If so, who enforces this?  If not, how does one find what is most informative?  One might respond: “Oh, this is easy.  The publisher of the journal or proceedings is responsible for these functions”.  But who would pay them?  Is there some sort of central editor or board who tries to organize the reviews to make them readable, check for ad hominem and objectionable reviews, etc.?  Who appoints and pays for this?  For how many years?  Without a clearly thought-out procedure, and a model of the organization and finances, the PPPR idea is an unworkable dream. 
  2. Quality: The reason that academic communities appoint reviewers is to perform quality control.  If Science or Nature started publishing, as of next month, literally everything they received, from high school papers to crank science about 7-dimensional aliens with anti-gravity spaceships, their value as a source of informed and responsible information would drop to zero.  Not even Mr. Liebregts would read them.  One can respond: “Oh, I don’t mean completely remove reviewers!  Just add a post-publication review board as well!”.  But then we have returned to the central objection.  The problem Mr. Liebregts is trying to solve is with current reviewers, and this response does not remove them or curtail their influence.  In fact, it is already possible for a paper that was rejected in one place and published somewhere else to garner additional critical attention and then to be re-published, for example in a collected volume of the most influential papers.  This has occurred for two centuries already.  PPPR adds nothing new here. 
  3. Integrity: If, as the PPPR idea advocates, literally anybody can be a [post-publication] reviewer, what prevents unscrupulous authors from canvassing reviews in their favour?  Even worse, an author might hire dozens or hundreds of people to fill in some quasi-review commentary template and submit this to whatever PPPR management process exists, in order to boost his or her academic standing artificially.  While the present-day problem of possible uninformed or sloppy reviewers is real, surely this situation is a lot worse.  This sort of canvassing subverts the goals of academic review, namely to provide some sort of (at least ideally) semi-objective judgment plus suggestions for improvement.  It turns reviewing into a political popularity contest.  One can stipulate (as Mr. Liebregts does): “Oh, these post-publication reviews have to be written by peers, not by anyone!  And in fact they have to review according to a standardized set of questions!”.  But the same sorts of management questions arise: who determines who is a ‘real’ peer?  Do people have to authenticate themselves?  Can they simply join a professional organization and thereby self-declare to be legitimate peers?  What happens if they do not follow the standardized questions?  Who edits their reviews?  In practice, how many academics will actually voluntarily fill in and submit such reviews?  (Most academics I know are already suffering from a reviewing burden.)  Again, it will be up to the author to canvass for voluntary (or paid) reviewers.  And in fact, doing so is already possible: nothing stops a researcher from canvassing fake reviews for his or her paper or position today.  So PPPR does not in fact suggest anything revolutionary; we simply don’t do it today because it adds no value. 

This discussion is about policy, and is political, not scientific.  Mr. Liebregts obtained his $1K challenge prize by a process of canvassing, and is exercising his right to argue his point.  He is providing us, as academics, a chance to exercise clear thinking and responsible reflection on one of our time-honoured practices.

By: Stephanie Hagstrom https://force11.org/group/force2015-1k-challenge-winner/#comment-11156 Mon, 04 May 2015 23:26:56 +0000 Adding Comment from deWaard Blog regarding 1k Challenge

Are the Crowds Really All That Wise? What the FORCE11 1k Challenge Taught us About Crowdreview
