the Neurosynth viewer goes modular and open source

If you’ve visited the Neurosynth website lately, you may have noticed that it looks… the same way it’s always looked. It hasn’t really changed in the last ~20 months, despite the vague promise on the front page that, in the next few months, we’re going to do X, Y, and Z to improve the functionality. The lack of updates isn’t by design; it’s because until recently I didn’t have much time to work on Neurosynth. Now that much of my time is committed to the project, things are moving ahead pretty nicely, though the changes behind the scenes aren’t reflected in any user-facing improvements yet.

The github repo is now regularly updated and even gets the occasional contribution from someone other than myself; I expect that to ramp up considerably in the coming months. You can already use the code to run your own automated meta-analyses fairly easily; e.g., with everything set up right (follow the Readme and examples in the repo), the following lines of code:

import cPickle
from neurosynth.analysis import meta  # module path may vary across versions

dataset = cPickle.load(open('dataset.pkl', 'rb'))
studies = dataset.get_ids_by_expression("memory* &~ (wm|working|episod*)", threshold=0.001)
ma = meta.MetaAnalysis(dataset, studies)
ma.save_results('memory')

…will perform an automated meta-analysis of all studies in the Neurosynth database that use the term ‘memory’ at a frequency of 1 in 1,000 words or greater, but don’t use the terms wm or working, or words that start with ‘episod’ (e.g., episodic). You can perform queries that nest to arbitrary depths, so it’s a pretty powerful engine for quickly generating customized meta-analyses, subject to all of the usual caveats surrounding Neurosynth (e.g., that the underlying data are very noisy, that terms aren’t mental states, etc.).
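
To give a flavor of the nesting, here’s a slightly more elaborate query. Treat it as an illustrative sketch only: the terms are made up for the example, and the exact expression syntax and method names may differ a bit across versions of the package.

# Hypothetical nested query: studies that mention memory- or recall-related
# terms, excluding working-memory and episodic-memory terms.
studies = dataset.get_ids_by_expression(
    "(memory* | recall*) &~ ((wm | working) | episod*)",
    threshold=0.001
)
ma = meta.MetaAnalysis(dataset, studies)
ma.save_results('memory_without_wm_or_episodic')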

Anyway, with the core tools coming along, I’ve started to turn back to other elements of the project, starting with the image viewer. Yesterday I pushed the first commit of a new version of the image viewer that currently runs on the Neurosynth website. In the next few weeks, this new version will replace the current one, along with a bunch of other changes to the website.

A live demo of the new viewer is available here. It’s not much to look at right now, but behind the scenes, it’s actually a huge improvement on the old viewer in a number of ways:

  • The code is completely refactored and is all nice and object-oriented now. It’s also written in CoffeeScript, an alternative syntax for JavaScript that (if you’re coming from a Python or Ruby background) is much more readable. The source code is on github and contributions are very much encouraged. Like most scientists, I’m generally loath to share my code publicly because I think it sucks most of the time. But I actually feel pretty good about this code. It’s not good code by any stretch, but I think it rises to the level of ‘mostly sensible’, which is about as much as I can hope for.
  • The viewer now handles multiple layers simultaneously, with the ability to hide and show layers, reorder them by dragging, vary the transparency, assign different color palettes, etc. These features have been staples of offline viewers pretty much since the prehistoric beginnings of fMRI time, but they aren’t available in the current Neurosynth viewer or most other online viewers I’m aware of, so this is a nice addition.
  • The architecture is modular, so it should be quite easy in the future to drop in alternative views of the data without having to muck about with the app logic. E.g., adding a 3D WebGL-based view to complement the current 2D slice-based HTML5 canvas approach is on the near-term agenda.
  • The resolution of the viewer is now higher: up from 4 mm to 2 mm (the most common native resolution used in packages like SPM and FSL). The original motivation for downsampling to 4 mm in the prior viewer was to keep file size to a minimum and speed up the initial loading of images. But at some point I realized, hey, we’re living in the 21st century; people have fast internet connections now. So now the files are all in 2 mm resolution, which has the unpleasant effect of increasing file sizes by a factor of about 8 (halving the voxel size along each of three dimensions means 2³ = 8 times as many voxels), but also has the pleasant effect of making it so that you can actually tell what the hell you’re looking at.

Most importantly, there’s now a clean and near-complete separation between the HTML/CSS content and the JavaScript code, which means you can effectively drop the viewer into just about any HTML page with just a few lines of code. So in theory, you can have basically the same viewer you see in the demo just by sticking something like the following into your page:

 // set up the viewer and attach the three orthogonal slice views
 viewer = Viewer.get('#layer_list', '.layer_settings');
 viewer.addView('#view_axial', 2);
 viewer.addView('#view_coronal', 1);
 viewer.addView('#view_sagittal', 0);
 // sliders controlling layer opacity and positive/negative thresholds
 viewer.addSlider('opacity', '.slider#opacity', 'horizontal', 'false', 0, 1, 1, 0.05);
 viewer.addSlider('pos-threshold', '.slider#pos-threshold', 'horizontal', 'false', 0, 1, 0, 0.01);
 viewer.addSlider('neg-threshold', '.slider#neg-threshold', 'horizontal', 'false', 0, 1, 0, 0.01);
 // color palette selector and readouts for the current voxel value and coordinates
 viewer.addColorSelect('#color_palette');
 viewer.addDataField('voxelValue', '#data_current_value');
 viewer.addDataField('currentCoords', '#data_current_coords');
 // load an anatomical underlay plus two meta-analysis overlays, then render
 viewer.loadImageFromJSON('data/MNI152.json', 'MNI152 2mm', 'gray');
 viewer.loadImageFromJSON('data/emotion_meta.json', 'emotion meta-analysis', 'bright lights');
 viewer.loadImageFromJSON('data/language_meta.json', 'language meta-analysis', 'hot and cold');
 viewer.paint();

Well, okay, there are some other dependencies and styling stuff you’re not seeing. But all of that stuff is included in the example folder here. And of course, you can modify any of the HTML/CSS you see in the example; the whole point is that you can now easily style the viewer however you want it, without having to worry about any of the app logic.

What’s also nice about this is that you can easily pick and choose which of the viewer’s features you want to include in your page; nothing will (or at least, should) break no matter what you do. So, for example, you could decide you only want to display a single view showing only axial slices; or to allow users to manipulate the threshold of layers but not their opacity; or to show the current position of the crosshairs but not the corresponding voxel value; and so on. All you have to do is include or exclude the relevant addView(), addSlider(), and addDataField() lines you see above.

Of course, it wouldn’t be a mediocre open source project if it didn’t have some important limitations I’ve been hiding from you until near the very end of this post (hoping, of course, that you wouldn’t bother to read this far down). The biggest limitation is that the viewer expects images to be in JSON format rather than a binary format like NIFTI or Analyze. This is a temporary headache until I or someone else can find the time and motivation to adapt one of the JavaScript NIFTI readers that are already out there (e.g., Satra Ghosh’s parser for xtk), but for now, if you want to load your own images, you’re going to have to take the extra step of first converting them to JSON. Fortunately, the core Neurosynth Python package has an img_to_json() function in the imageutils module that will read in a NIFTI or Analyze volume and produce a JSON string in the expected format. I’m pretty sure it doesn’t handle orientation properly for some images, though, so don’t be surprised if your images look wonky. (And more importantly, if you fix the orientation issue, please commit your changes to the repo.)
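
As a rough illustration, the conversion step might look something like this. Treat it as a sketch: the import path and exact signature of img_to_json() are assumptions on my part, so check the imageutils source in the repo before relying on them.

from neurosynth.base import imageutils  # module path assumed; check the repo

# Read a NIFTI (or Analyze) volume and write out JSON in the format the viewer expects.
json_data = imageutils.img_to_json('my_map.nii.gz')
with open('data/my_map.json', 'w') as f:
    f.write(json_data)

The resulting file can then be handed to loadImageFromJSON() just like the example images in the snippet above.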

In any case, as long as you’re comfortable with a bit of HTML/CSS/JavaScript hacking, the example/ folder in the github repo has everything you need to drop the viewer into your own pages. If you do use this code internally, please let me know! Partly for my own edification, but mostly because when I write my annual progress reports to the NIH, it’s nice to be able to truthfully say, “hey, look, people are actually using this neat thing we built with taxpayer money.”

tracking replication attempts in psychology–for real this time

I’ve written a few posts on this blog about how the development of better online infrastructure could help address and even solve many of the problems psychologists and other scientists face (e.g., the low reliability of peer review, the ‘fudge factor’ in statistical reporting, the sheer size of the scientific literature, etc.). Actually, that general question–how we can use technology to do better science–occupies a good chunk of my research these days (see e.g., Neurosynth). One question I’ve been interested in for a long time is how to keep track not only of ‘successful’ studies (i.e., those that produce sufficiently interesting effects to make it into the published literature), but also replication failures (or successes of limited interest) that wind up in researchers’ file drawers. A couple of years ago I went so far as to build a prototype website for tracking replication attempts in psychology. Unfortunately, it never went anywhere, partly (okay, mostly) because the site really sucked, and partly because I didn’t really invest much effort in drumming up interest (mostly due to lack of time). But I still think the idea is a valuable one in principle, and a lot of other people have independently had the same idea (which means it must be right, right?).

Anyway, it looks like someone finally had the cleverness, time, and money to get this right. Hal Pashler, Sean Kang*, and colleagues at UCSD have been developing an online database for tracking attempted replications of psychology studies for a while now, and it looks like it’s now in beta. PsychFileDrawer is a very slick, full-featured platform that really should–if there’s any justice in the world–provide the kind of service everyone’s been saying we need for a long time now. If it doesn’t work, I think we’ll have some collective soul-searching to do, because I don’t think it’s going to get any easier than this to add and track attempted replications. So go use it!

 

*Full disclosure: Sean Kang is a good friend of mine, so I’m not completely impartial in plugging this (though I’d do it anyway). Sean also happens to be amazingly smart and in search of a faculty job right now. If I were you, I’d hire him.

the short but eventful magnetosensing life of cows

I’ve given several talks in the last few months about the Neurosynth framework, which is designed to help facilitate large-scale automated meta-analysis of fMRI data (see this paper, or these slides from my most recent talk). On a couple of occasions, I’ve decided to start out by talking about something other than brains. In particular, I’ve opted to talk about cows. Specifically, the cows in this study:

…in which the authors–Sabine Begall and colleagues–took Google Earth satellite imagery like this (yes, those tiny ant-like blobs are cows):

…and performed the clever trick of using Google Earth to determine that cows (and deer too!) naturally tend to align themselves along a geomagnetic north-south axis. In other words, cows have magnets in their brains! You have to admit that’s pretty amazing (unless you’re the kind of person who refuses to admit anything is amazing in the presence of other people, even though you secretly look them up and marvel at them later when you’re alone in your bedroom).

Now, superficially, this finding doesn’t actually have very much to do with any of the work I’ve done recently. Okay, not just superficially; it really has absolutely nothing to do with any of the work I’ve done recently. But the more general point I was trying to make was that advances in technology often allow us to solve scientific problems we couldn’t address before, even when the technology in question was originally designed for very different purposes (and I’m pretty confident that Google Earth wasn’t conceived as a means of studying cow alignment). That’s admittedly more than a bit grandiose, inasmuch as none of the work I’ve done on Neurosynth is in any way comparable to the marvel that is Google Earth. But, you know, it’s the principle that counts. And the principle is that we should try to use the technology we have (and here I’m just talking about the web, not billion-dollar satellites) to do neat scientific things.

Anyway, I was feeling quite pleased with myself for coming up with this completely tangential introduction–so much so that I used it in two or three talks, to great success (by which I mean it confused the hell out of the audience). But then one day I made a horrible mistake. And that horrible mistake was to indulge the nagging little voice that kept saying, come now, cows with magnetic brains? really? maybe you should double-check this, just to make sure. So the last time I was about to use the cow slides, I went and did a lit search just to make sure I was still on the cutting edge of the bovine geomagnetic sensing literature. Well, as it turns out I was NOT on the cutting edge! I’d fallen off the edge! Way off! Just a few months ago, you see, this little gem popped up in the literature:

Basically the authors tried to replicate the Begall et al findings and couldn’t. They argued that the original findings were likely due to poor satellite imagery coupled with confirmation bias. So it now appears that cows don’t have the foggiest conception of magnetic fields after all. They just don’t get to join the sparrow-and-spiny-lobster club, no matter how much they whine to the bouncer at the door. Which leads me to my current predicament: what the hell should I do about the cow slides I went to the trouble of making? (Yes, this is the kind of stuff I worry about at midnight on a Wednesday after I’ve written as many job application cover letters as I can deal with in one night, and have safely verified that my Netflix Instant queue contains 233 movies I have no interest at all in watching.)

I suppose the reasonable thing to do would be to jettison the cow slides entirely. But I don’t really want to do that. It’s not like there’s a lack of nifty* and unexpected uses of technology to solve scientific problems; it’s just that, you know, I kind of got attached to this particular example. Plus I’m lazy and don’t want to revise my slides if I can help it. The last time I presented the cow slides in a talk–which was after I discovered that cows don’t know the north-south axis from a hole in the ground–I just added a slide showing the image of the Hert et al rebuttal paper you see above, and called it a “postscript”. Then I made some lame comment about how, hah, you see, just like you can use Google Earth to discover interesting new findings, you can also use it to debunk interesting spurious findings, so that’s still okay! But that’s not going to cut it; I’m thinking that next time out, I’m going to have to change things up. Still, to minimize effort, maybe I’ll keep the Google Earth thing going, but simply lose the cows. Instead, I can talk about, I don’t know, using satellite imagery to discover long-buried Mayan temples and Roman ruins. That still sort of counts as science, right?

 

 

* Does anyone still use the word ‘nifty’ in casual conversation? No? Well I like it, so there.