building better platforms for evaluating science: a request for feedback

UPDATE 4/20/2012: a revised version of the paper mentioned below is now available here.

A couple of months ago I wrote about a call for papers for a special issue of Frontiers in Computational Neuroscience focusing on “Visions for Open Evaluation of Scientific Papers by Post-Publication Peer Review”. I wrote a paper for the issue, the gist of which is that many of the features scientists should want out of a next-generation open evaluation platform are already implemented all over the place in social web applications, so that building platforms for evaluating scientific output should be more a matter of adapting existing techniques than having to come up with brilliant new approaches. I’m talking about features like recommendation engines, APIs, and reputation systems, which you can find everywhere from Netflix to Pandora to Stack Overflow to Amazon, but (unfortunately) virtually nowhere in the world of scientific publishing.
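
To make that a bit more concrete, here’s a toy sketch of the item-based collaborative filtering at the heart of a basic recommendation engine (just substitute “papers” for “movies” or “songs”). Everything in it, the rating matrix and the function names included, is made up for illustration; real systems are vastly more sophisticated, but the core idea really is this simple:

```python
# Toy item-based collaborative filtering over a tiny (user x paper) rating matrix.
# Rows are users, columns are papers; 0 means "hasn't rated it yet".
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [1, 0, 5, 4],   # user 2
    [0, 1, 4, 5],   # user 3
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def recommend(user, ratings, top_n=2):
    """Score each unrated paper by the similarity-weighted ratings of papers
    the user *has* rated, then return the highest-scoring ones."""
    n_papers = ratings.shape[1]
    # Pairwise similarity between papers, based on how the same users rated them.
    sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                     for j in range(n_papers)] for i in range(n_papers)])
    rated = ratings[user] > 0
    scores = {}
    for p in range(n_papers):
        if not rated[p]:                      # only recommend unseen papers
            w = sim[p, rated]
            scores[p] = (w @ ratings[user, rated]) / (w.sum() or 1.0)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(user=0, ratings=ratings))     # -> [2], the only paper user 0 hasn't rated yet
```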

Since the official deadline for submission is two months away (no, I’m not so conscientious that I habitually finish my writing assignments two months ahead of time–I just failed to notice that the deadline had been pushed way back), I figured I may as well use the opportunity to make the paper openly accessible right now in the hopes of soliciting some constructive feedback. This is a topic that’s kind of off the beaten path for me, and I’m not convinced I really know what I’m talking about (well, fine, I’m actually pretty sure I don’t know what I’m talking about), so I’d love to get some constructive criticism from people before I submit a final version of the manuscript. Not only from scientists, but ideally also from people with experience developing social web applications–or actually, just about anyone with good ideas about how to implement and promote next-generation evaluation platforms. I mean, if you use Netflix or reddit regularly, you’re pretty much a de facto expert on collaborative filtering and recommendation systems, right?

Anyway, here’s the abstract:

Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize: (a) open and transparent access to accumulated evaluation data, (b) personalized and highly customizable performance metrics, and (c) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting towards such models as soon as possible.
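
As a quick illustration of what I mean by “personalized and highly customizable performance metrics” in (b): the basic idea is just that each user decides how the platform’s evaluation signals get weighted, instead of everyone being handed the same one-size-fits-all number. Here’s a minimal sketch; every signal name, weight, and value in it is invented purely for illustration:

```python
# Toy sketch of a "personalized, customizable performance metric": each user
# supplies their own weights over whatever evaluation signals the platform
# exposes, and the platform computes a weighted composite per paper.
# All signal names and numbers below are invented for illustration.

def personalized_score(paper, weights):
    """Weighted composite of a paper's evaluation signals, normalized by the
    total weight so scores from different weighting schemes stay comparable."""
    total = sum(weights.values()) or 1.0
    return sum(w * paper.get(signal, 0.0) for signal, w in weights.items()) / total

paper = {
    "mean_reviewer_rating": 4.2,   # e.g., average post-publication review score (1-5)
    "reviewer_reputation": 3.8,    # average reputation of the reviewers themselves
    "replication_status": 1.0,     # 1 if independently replicated, else 0
    "reader_bookmarks": 2.5,       # some normalized usage/attention signal
}

# Two users who care about very different things get different scores for
# the same paper, which is the whole point.
methods_person = {"replication_status": 5, "mean_reviewer_rating": 2}
trend_watcher  = {"reader_bookmarks": 5, "mean_reviewer_rating": 1}

print(personalized_score(paper, methods_person))  # ~1.91
print(personalized_score(paper, trend_watcher))   # ~2.78
```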

You can download the PDF here (or grab it from SSRN here). It features a cameo by Archimedes and borrows concepts liberally from sites like reddit, Netflix, and Stack Overflow (with attribution, of course). I’d love to hear your comments; you can either leave them below or email me directly. Depending on what kind of feedback I get (if any), I’ll try to post a revised version of the paper here in a month or so that works in people’s comments and suggestions.

(fanciful depiction of) Archimedes, renowned ancient Greek mathematician and co-inventor (with Al Gore) of the open access internet repository

6 thoughts on “building better platforms for evaluating science: a request for feedback”

  1. This is a really pertinent and interesting article, Tal. This resonates with much of the last 3 months of news in The Chronicle, with call after call for improving peer review both technologically and financially.

    One citation that you may be aware of but seemed very relevant is the Shakespeare Quarterly open review experiment (http://mediacommons.futureofthebook.org/mcpress/ShakespeareQuarterly_NewMedia/). If you read the first section titled “From the Editor: Gentle Numbers” there is a summary of what the project highlights about the peer review process, which might be of interest for your article.

    One thought on framing: I found the use of Archimedes in the conclusion very concise; it leads clearly into the rest of the conclusion, and the clarity there made me think that the discussion of Archimedes in the intro section transitioned less seamlessly. One thought I had was to focus more on the role of his (relatively) unthanked contemporaries in preserving his work as analogous to the unthanked work of reviewers in the modern peer review system. Not sure where to go from there, but hopefully this is helpful for you. 🙂

    Your article highlights many of the peer review problems that we’re trying to fix with our new application Scholastica (www.scholasticahq.com). We developed our core features based on interviews with ~50 academics involved in peer review as authors, reviewers, and editors – so we were excited to see our “findings” about the core problems in peer review reflected in your own analysis.

    Keep up the good work, and good luck with the submission!

  2. As you perhaps know, sabermetrics (the quantitative analysis of baseball and, by extension, of any sport) has shown outstanding development mostly through post-publication, internet-based comments rather than through a peer-review process (although such a process does exist, its quality is not highly praised by the sabermetrics community).

    In any case, the issue you raise has been discussed in sabermetrician circles, and I just thought it could help your thought process.

    See http://www.insidethebook.com/ee/index.php/site/comments/peer_review_v_internet_debate/#comments

    and

    http://sabermetricresearch.blogspot.com/2010/01/evaluating-scientific-debates-some.html

  3. I very much enjoyed the ideas you present in the paper. I couldn’t agree more with what you propose. However, I think one point should have been discussed more extensively: whose responsibility it is to set things in motion. It is clear that vast resources are necessary to implement a platform like the one you propose, and further resources are needed to maintain it. Could it be a startup that covers its costs with ads? Or should some university take responsibility, as in the case of arXiv, and get support from other universities? Or maybe it should be a more formal body, some kind of NSF thing?

    There is no simple answer, and it’s probably past the deadline now anyway, but nonetheless this might be something worth thinking about.

