building better platforms for evaluating science: a request for feedback

UPDATE 4/20/2012: a revised version of the paper mentioned below is now available here.

A couple of months ago I wrote about a call for papers for a special issue of Frontiers in Computational Neuroscience focusing on “Visions for Open Evaluation of Scientific Papers by Post-Publication Peer Review”. I wrote a paper for the issue, the gist of which is that many of the features scientists should want out of a next-generation open evaluation platform are already implemented all over the place in social web applications, so building platforms for evaluating scientific output should be more a matter of adapting existing techniques than of coming up with brilliant new approaches. I’m talking about features like recommendation engines, APIs, and reputation systems, which you can find everywhere from Netflix to Pandora to Stack Overflow to Amazon, but (unfortunately) virtually nowhere in the world of scientific publishing.
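Just to make that concrete: here’s a toy sketch (in Python, with entirely made-up users, papers, and ratings) of the kind of user-based collaborative filtering that sits behind recommendation engines like Netflix’s. A real platform would obviously do something far more sophisticated, but the core idea really is about this simple: find raters whose past judgments resemble yours, and weight their ratings of things you haven’t seen accordingly.

```python
# Toy user-based collaborative filtering over hypothetical paper ratings.
# All names and ratings below are invented for illustration.
from math import sqrt

ratings = {
    "alice": {"paper_a": 5, "paper_b": 3, "paper_c": 4},
    "bob":   {"paper_a": 4, "paper_b": 2, "paper_d": 5},
    "carol": {"paper_b": 5, "paper_c": 2, "paper_d": 1},
}

def cosine_similarity(u, v):
    """Cosine similarity computed over the papers two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[p] * v[p] for p in shared)
    norm_u = sqrt(sum(u[p] ** 2 for p in shared))
    norm_v = sqrt(sum(v[p] ** 2 for p in shared))
    return dot / (norm_u * norm_v)

def recommend(user, ratings, top_n=3):
    """Predict ratings for papers the user hasn't seen by weighting other
    users' ratings by how similar their rating history is to the target's."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        if sim <= 0:
            continue
        for paper, rating in their_ratings.items():
            if paper in ratings[user]:
                continue
            scores[paper] = scores.get(paper, 0.0) + sim * rating
            weights[paper] = weights.get(paper, 0.0) + sim
    ranked = sorted(((scores[p] / weights[p], p) for p in scores), reverse=True)
    return ranked[:top_n]

# Papers alice hasn't rated, ranked by predicted rating.
print(recommend("alice", ratings))
```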

Since the official deadline for submission is still two months away (no, I’m not so conscientious that I habitually finish my writing assignments two months ahead of time; I just failed to notice that the deadline had been pushed way back), I figured I might as well use the opportunity to make the paper openly accessible right now, in the hopes of soliciting some constructive feedback. This is a topic that’s kind of off the beaten path for me, and I’m not convinced I really know what I’m talking about (well, fine, I’m actually pretty sure I don’t know what I’m talking about), so I’d love to get some constructive criticism before I submit a final version of the manuscript. I’d welcome input not only from scientists, but also from people with experience developing social web applications, or really from just about anyone with good ideas about how to implement and promote next-generation evaluation platforms. I mean, if you use Netflix or reddit regularly, you’re pretty much a de facto expert on collaborative filtering and recommendation systems, right?

Anyway, here’s the abstract:

Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize: (a) open and transparent access to accumulated evaluation data, (b) personalized and highly customizable performance metrics, and (c) appropriate short-term incentivization of the user base. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting towards such models as soon as possible.
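To give a flavor of what I have in mind for point (b), here’s a minimal, purely illustrative sketch of a “customizable metric”. Every signal, weight, and number below is invented, and a real platform would obviously normalize the signals and expose much richer options; the point is just that different readers can rank the same papers very differently depending on what they choose to care about.

```python
# Toy "customizable metric": each reader picks how much weight to give
# different evaluation signals when ranking papers. All fields and values
# are made up, and the raw signals aren't normalized (this is a sketch).

papers = [
    {"title": "Paper A", "reviewer_score": 4.2, "citations": 15, "downloads": 900,  "data_shared": 1},
    {"title": "Paper B", "reviewer_score": 3.1, "citations": 40, "downloads": 300,  "data_shared": 0},
    {"title": "Paper C", "reviewer_score": 4.8, "citations": 5,  "downloads": 1200, "data_shared": 1},
]

def personalized_score(paper, weights):
    """Weighted sum of whichever signals this particular reader cares about."""
    return sum(w * paper.get(signal, 0.0) for signal, w in weights.items())

# One reader might care mostly about peer ratings and open data...
methods_person = {"reviewer_score": 1.0, "data_shared": 2.0}
# ...another mostly about raw attention.
impact_person = {"citations": 0.05, "downloads": 0.001}

for weights in (methods_person, impact_person):
    ranked = sorted(papers, key=lambda p: personalized_score(p, weights), reverse=True)
    print([p["title"] for p in ranked])
```

Running this, the two hypothetical readers end up with different orderings of the same three papers, which is exactly the kind of flexibility a one-size-fits-all metric like the impact factor can’t offer.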

You can download the PDF here (or grab it from SSRN here). It features a cameo by Archimedes and borrows concepts liberally from sites like reddit, Netflix, and Stack Overflow (with attribution, of course). I’d love to hear your comments; you can either leave them below or email me directly. Depending on what kind of feedback I get (if any), I’ll try to post a revised version of the paper here in a month or so that incorporates people’s comments and suggestions.

(fanciful depiction of) Archimedes, renowned ancient Greek mathematician and co-inventor (with Al Gore) of the open access internet repository