If we already understood the brain, would we even know it?

The question posed in the title is intended seriously. A lot of people have been studying the brain for a long time now. Most of these people, if asked a question like “so when are you going to be able to read minds?”, will immediately scoff and say something to the effect of we barely understand anything about the brain–that kind of thing is crazy far into the future! To a non-scientist, I imagine this kind of thing must seem bewildering. I mean, here we have a community of tens of thousands of extremely smart people who have collectively been studying the same organ for over a hundred years; and yet, almost to the last person, they will adamantly proclaim to anybody who listens that the amount they currently know about the brain is very, very small compared to the amount that they expect the human species to know in the future.

I’m not convinced this is true. I think it’s worth observing that if you ask someone who has just finished telling you how little we collectively know about the brain how much they personally actually know about the brain–without the implied contrast with the sum of all humanity–they will probably tell you that, actually, they kind of know a lot about the brain (at least, once they get past the false modesty). Certainly I don’t think there are very many neuroscientists running around telling people that they’ve literally learned almost nothing since they started studying the gray sludge inside our heads. I suspect most neuroanatomists could probably recite several weeks’ worth of facts about the particular brain region or circuit they study, and I have no shortage of fMRI-experienced friends who won’t shut up about this brain network or that brain region–so I know they must know a lot about something to do with the brain. We thus find ourselves in the rather odd situation of having some very smart people apparently simultaneously believe that (a) we all collectively know almost nothing, and (b) they personally are actually quite learned (pronounced luhrn-ED) in their chosen subject. The implication seems to be that, if we multiply what one really smart present-day neuroscientist knows a few tens of thousands of times, that’s still only a tiny fraction of what it would take to actually say that we really “understand” the brain.

I find this problematic in two respects. First, I think we actually already know quite a lot about the brain. And second, I don’t think future scientists–who, remember, are people similar to us in both number and intelligence–will know dramatically more. Or rather, I think future neuroscientists will undoubtedly amass orders of magnitude more collective knowledge about the brain than we currently possess. But, barring some momentous fusion of human and artificial intelligence, I’m not at all sure that will translate into a corresponding increase in any individual neuroscientist’s understanding. I’m willing to stake a moderate sum of money, and a larger amount of dignity, on the assertion that if you ask a 2030, 2050, or 2118 neuroscientist–assuming both humans and neuroscience are still around then–if they individually understand the brain given all of the knowledge we’ve accumulated, they’ll laugh at you in exactly the way that we laugh at that question now.

* * *

We probably can’t predict when the end of neuroscience will arrive with any reasonable degree of accuracy. But trying to conjure up some rough estimates can still help us calibrate our intuitions about what would be involved. One way we can approach the problem is to try to figure out at what rate our knowledge of the brain would have to grow in order to arrive at the end of neuroscience within some reasonable time frame.

To do this, we first need an estimate of how much more knowledge it would take before we could say with a straight face that we understand the brain. I suspect that “1000 times more” would probably seem like a low number to most people. But let’s go with that, for the sake of argument. Let’s suppose that we currently know 0.1% of all there is to know about the brain, and that once we get to 100%, we will be in a position to stop doing neuroscience, because we will at that point already have understood everything.

Next, let’s pick a reasonable-sounding time horizon. Let’s say… 200 years. That’s twice as long as Eric Kandel thinks it will take just to understand memory. Frankly, I’m skeptical that humans will still be living on this planet in 200 years, but that seems like a reasonable enough target. So basically, we need to learn 1000 times as much as we know right now in the space of 200 years. Better get to the library! (For future neuroscientists reading this document as an item of archival interest about how bad 2018 humans were at predicting the future: the library is a large, public physical space that used to hold things called books, but now holds only things called coffee cups and laptops.)

A 1000-fold return over 200 years is… 3.5% compounded annually. Hey, that’s actually not so bad. I can easily believe that our knowledge about the brain increases at that rate. It might even be more than that. I mean, the stock market historically gets 6-10% returns, and I’d like to believe that neuroscience outperforms the stock market. Regardless, under what I think are reasonably sane assumptions, I don’t think it’s crazy to suggest that the objective compounding of knowledge might not be the primary barrier preventing future neuroscientists from claiming that they understand the brain. Assuming we don’t run into any fundamental obstacles that we’re unable to overcome via new technology and/or brilliant ideas, we can look forward to a few of our great-great-great-great-great-great-great-great-grandchildren being the unlucky ones who get to shut down all of the world’s neuroscience departments and tell all of their even-less-lucky graduate students to go on home, because there are no more problems left to solve.
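For anyone who wants to double-check that 3.5% figure, it's just the 200th root of 1000 (a two-line sanity check in Python, nothing more):

```python
# What annual growth rate turns 1 unit of knowledge into 1000 units over 200 years?
rate = 1000 ** (1 / 200) - 1
print(f"{rate:.1%}")   # -> 3.5%
```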

Well, except probably not. Because, for the above analysis to go through, you have to believe that there’s a fairly tight relationship between what all of us know, and what any of us know. Meaning, you have to believe that once we’ve successfully acquired all of the possible facts there are to acquire about the brain, there will be some flashing light, some ringing bell, some deep synthesized voice that comes over the air and says, “nice job, people–you did it! You can all go home now. Last one out gets to turn off the lights.”

I think the probability of such a thing happening is basically zero. Partly because the threat to our egos would make it very difficult to just walk away from what we’d spent much of our life doing; but mostly because the fact that somewhere out there there existed a repository of everything anyone could ever want to know about the brain would not magically cause all of that knowledge to be transduced into any individual brain in a compact, digestible form. In fact, it seems like a safe bet that no human (perhaps barring augmentation with AI) would be able to absorb and synthesize all of that knowledge. More likely, the neuroscientists among us would simply start “recycling” questions. Meaning, we would keep coming up with new questions that we believe need investigating, but those questions would only seem worthy of investigation because we lack the cognitive capacity to recognize that the required information is already available–it just isn’t packaged in our heads in exactly the right way.

What I’m suggesting is that, when we say things like “we don’t really understand the brain yet”, we’re not really expressing factual statements about the collective sum of neuroscience knowledge currently held by all human beings. What each of us really means is something more like there are questions I personally am able to pose about the brain that seem to make sense in my head, but that I don’t currently know the answer to–and I don’t think I could piece together the answer even if you handed me a library of books containing all of the knowledge we’ve accumulated about the brain.

Now, for a great many questions of current interest, these two notions clearly happen to coincide–meaning, it’s not just that no single person currently alive knows the complete answer to a question like “what are the neural mechanisms underlying sleep?”, or “how do SSRIs help ameliorate severe depression?”, but that the sum of all knowledge we’ve collectively acquired at this point may not be sufficient to enable any person or group of persons, no matter how smart, to generate a comprehensive and accurate answer. But I think there are also a lot of questions where the two notions don’t coincide. That is, there are many questions neuroscientists are currently asking that we could say with a straight face we do already know how to answer collectively–despite vehement assertions to the contrary on the part of many individual scientists. And my worry is that, because we all tend to confuse our individual understanding (which is subject to pretty serious cognitive limitations) with our collective understanding (which is not), there’s a non-trivial risk of going around in circles. Meaning, the fact that we’re individually not able to understand something–or are individually unsatisfied with the extant answers we’re familiar with–may lead us to devise ingenious experiments and expend considerable resources trying to “solve” problems that we collectively do already have perfectly good answers to.

Let me give an example to make this more concrete. Many (though certainly not all) people who work with functional magnetic resonance imaging (fMRI) are preoccupied with questions of the form what is the core function of X–where X is typically some reasonably well-defined brain region or network, like the ventromedial prefrontal cortex, the fusiform face area, or the dorsal frontoparietal network. Let’s focus our attention on one network that has attracted particular attention over the past 10–15 years: the so-called “default mode” or “resting state” network. This network is notable largely for its proclivity to show increased activity when people are in a state of cognitive rest–meaning, when they’re free to think about whatever they like, without any explicit instruction to direct their attention or thoughts to any particular target or task. A lot of cognitive neuroscientists in recent years have invested time trying to understand the function(s) of the default mode network (DMN; for reviews, see Buckner, Andrews-Hanna, & Schacter, 2008; Andrews-Hanna, 2012; Raichle, 2015). Researchers have observed that the DMN appears to show robust associations with autobiographical memory, social cognition, self-referential processing, mind wandering, and a variety of other processes.

If you ask most researchers who study the DMN if they think we currently understand what the DMN does, I think nearly all of them will tell you that we do not. But I think that’s wrong. I would argue that, depending on how you look at it, we either (a) already do have a pretty good understanding of the “core functions” of the network, or (b) will never have a good answer to the question, because it can’t actually be answered.

The sense in which we already know the answer is that we have pretty good ideas about what kinds of cognitive and affective processes are associated with changes in DMN activity. They include self-directed cognition, autobiographical memory, episodic future thought, stressing out about all the things one has to do in the next few days, and various other things. We know that the DMN is associated with these kinds of processes because we can elicit activation increases in DMN regions by asking people to engage in tasks that we believe engage these processes. And we also know, from both common sense and experience-sampling studies, that when people are in the so-called “resting state”, they disproportionately tend to spend their time thinking about such things. Consequently, I think there’s a perfectly good sense in which we can say that the “core function” of the DMN is nothing more and nothing less than supporting the ability to think about things that people tend to think about when they’re at rest. And we know, to a first order of approximation, what those are.

In my anecdotal experience, most people who study the DMN are not very satisfied with this kind of answer. Their response is usually something along the lines of: but that’s just a description of what kinds of processes tend to co-occur with DMN activation. It’s not an explanation of why the DMN is necessary for these functions, or why these particular brain regions are involved.

I think this rebuttal is perfectly reasonable, inasmuch as we clearly don’t have a satisfying computational account of why the DMN is what it is. But I don’t think there can be a satisfying account of this kind. I think the question itself is fundamentally ill-posed. Taking it seriously requires us to assume that, just because it’s possible to observe the DMN activate and deactivate with what appears to be a high degree of coherence, there must be a correspondingly coherent causal characterization of the network. But there doesn’t have to be–and if anything, it seems exceedingly unlikely that there’s any such explanation to be found. Instead, I think the seductiveness of the question is largely an artifact of human cognitive biases and limitations–and in particular, of the burning human desire for simple, easily-digested explanations that can fit inside our heads all at once.

It’s probably easiest to see what I mean if we consider another high-profile example from a very different domain. Consider the so-called “general factor” of fluid intelligence (gF). Over a century of empirical research on individual differences in cognitive abilities has demonstrated conclusively that nearly all cognitive ability measures tend to be positively and substantially intercorrelated–an observation Spearman famously dubbed the “positive manifold” all the way back in 1904. If you give people 20 different ability measures and do a principal component analysis (PCA) on the resulting scores, the first component will explain a very large proportion of the variance in the original measures. This seemingly important observation has led researchers to propose all kinds of psychological and biological theories intended to explain why and how people could vary so dramatically on a single factor–for example, that gF reflects differences in the ability to control attention in the face of interference (e.g., Engle et al., 1999); that “the crucial cognitive mechanism underlying fluid ability lies in storage capacity” (Chuderski et al., 2012); that “a discrete parieto-frontal network underlies human intelligence” (Jung & Haier, 2007); and so on.

The trouble with such efforts–at least with respect to the goal of explaining gF–is that they tend to end up (a) essentially redescribing the original phenomenon using a different name, (b) proposing a mechanism that, upon further investigation, only appears to explain a fraction of the variation in question, or (c) providing an extremely disjunctive reductionist account that amounts to a long list of seemingly unrelated mechanisms. As an example of (a), it’s not clear why it’s an improvement to attribute differences in fluid intelligence to the ability to control attention, unless one has some kind of mechanistic story that explains where attentional control itself comes from. When people do chase after such mechanistic accounts at the neurobiological or genetic level, they tend to end up with models that don’t capture more than a small fraction of the variance in gF (i.e., (b)) unless the models build in hundreds if not thousands of features that clearly don’t reflect any single underlying mechanism (i.e., (c); see, for example, the latest GWAS studies of intelligence).

Empirically, nobody has ever managed to identify any single biological or genetic variable that explains more than a small fraction of the variation in gF. From a statistical standpoint, this isn’t surprising, because a very parsimonious explanation of gF is that it’s simply a statistical artifact–as Godfrey Thomson suggested over 100 years ago. You can read much more about the basic issue in this excellent piece by Cosma Shalizi, or in this much less excellent, but possibly more accessible, blog post I wrote a few years ago. But the basic gist of it is this: when you have a bunch of measures that all draw on a heterogeneous set of mechanisms, but the contributions of those mechanisms generally have the same direction of effect on performance, you cannot help but observe a large first PCA component, even if the underlying mechanisms are actually extremely heterogeneous and completely independent of one another.
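To make that gist slightly more concrete, here's a toy numerical check (the numbers are mine, chosen purely for illustration): once a set of measures is uniformly positively intercorrelated, a dominant first principal component is mathematically guaranteed, no matter what mixture of mechanisms produced those correlations.

```python
# Toy illustration: 20 measures, all pairwise correlated at r = 0.5, with nothing
# assumed about where those correlations came from.
import numpy as np

p, r = 20, 0.5
R = np.full((p, p), r)        # equicorrelation matrix...
np.fill_diagonal(R, 1.0)      # ...with 1s on the diagonal

eigvals = np.linalg.eigvalsh(R)[::-1]    # eigenvalues, largest first
print(eigvals[0] / p)                    # first PC's share of variance: (1 + (p - 1) * r) / p = 0.525
```

The generative half of the story (that a large collection of completely independent mechanisms can produce exactly this kind of correlation matrix) is what Shalizi's simulations demonstrate.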

The implications of this for efforts to understand what the general factor of fluid intelligence “really is” are straightforward: there’s probably no point in trying to come up with a single coherent explanation of gF, because gF is a statistical abstraction. It’s the inevitable result we arrive at when we measure people’s performance in a certain way and then submit the resulting scores to a certain kind of data reduction technique. If we want to understand the causal mechanisms underlying gF, we have to accept that they’re going to be highly heterogeneous, and probably not easily described at the same level of analysis at which gF appears to us as a coherent phenomenon. One way to think about this is that what we’re doing is not really explaining gF so much as explaining away gF. That is, we’re explaining why it is that a diverse array of causal mechanisms can, when analyzed a certain way, look like a single coherent factor. Solving the mystery of gF doesn’t require more research or clever new ideas; there just isn’t any mystery there to solve. It’s no more sensible to seek a coherent mechanistic basis for gF than to seek a unitary causal explanation for a general athleticism factor or a general height factor (it turns out that if you measure people’s physical height under an array of different conditions, the measurements are all strongly correlated–yet strangely, we don’t see scientists falling over themselves to try to find the causal factor that explains why some people are taller than others).

The same thing is true of the DMN. It isn’t a single causally coherent system; it’s just what you get when you stick people in the scanner and contrast the kinds of brain patterns you see when you give them externally-directed tasks that require them to think about the world outside them with the kinds of brain patterns you see when you leave them to their own devices. There are, of course, statistical regularities in the kinds of things people think about when their thoughts are allowed to roam free. But those statistical regularities don’t reflect some simple, context-free structure of people’s thoughts; they also reflect the conditions under which we’re measuring those thoughts, the population being studied, the methods we use to extract coherent patterns of activity, and so on. Most of these factors are at best of secondary interest, and taking them into consideration would likely lead to a dramatic increase in model complexity. Nevertheless, if we’re serious about coming up with decent models of reality, that seems like a road we’re obligated to go down–even if the net result is that we end up with causal stories so complicated that they don’t feel like we’re “understanding” much.

Lest I be accused of some kind of neuroscientific nihilism, let me be clear: I’m not saying that there are no new facts left to learn about the dynamics of the DMN. Quite the contrary. It’s clear there’s a ton of stuff we don’t know about the various brain regions and circuits that comprise the thing we currently refer to as the DMN. It’s just that that stuff lies almost entirely at levels of analysis below the level at which the DMN emerges as a coherent system. At the level of cognitive neuroimaging, I would argue that we actually already have a pretty darn good idea about what the functional correlates of DMN regions are–and for that matter, I think we also already pretty much “understand” what all of the constituent regions within the DMN do individually. So if we want to study the DMN productively, we may need to give up on high-level questions like “what are the cognitive functions of the DMN?”, and instead satisfy ourselves with much narrower questions that focus on only a small part of the brain dynamics that, when measured and analyzed in a certain way, get labeled “default mode network”.

As just one example, we still don’t know very much about the morphological properties of neurons in most DMN regions. Does the structure of neurons located in DMN regions have anything to do with the high-level dynamics we observe when we measure brain activity with fMRI? Yes, probably. It’s very likely that the coherence of the DMN under typical measurement conditions is to at least some tiny degree a reflection of the morphological features of the neurons in DMN regions–just like it probably also partly reflects those neurons’ functional response profiles, the neurochemical gradients the neurons bathe in, the long-distance connectivity patterns in DMN regions, and so on and so forth. There are literally thousands of legitimate targets of scientific investigation that would in some sense inform our understanding of the DMN. But they’re not principally about the DMN, any more than an investigation of myelination mechanisms that might partly give rise to individual differences in nerve conduction velocity in the brain could be said to be about the general factor of intelligence. Moreover, it seems fairly clear that most researchers who’ve spent their careers studying large-scale networks using fMRI are not likely to jump at the chance to go off and spend several years doing tract tracing studies of pyramidal neurons in ventromedial PFC just so they can say that they now “understand” a little bit more about the dynamics of the DMN. Researchers working at the level of large-scale brain networks are much more likely to think of such questions as mere matters of implementation–i.e., just not the kind of thing that people trying to identify the unifying cognitive or computational functions of the DMN as a whole need to concern themselves with.

Unfortunately, chasing those kinds of implementation details may be exactly what it takes to ultimately “understand” the causal basis of the DMN in any meaningful sense if the DMN as cognitive neuroscientists speak of it is just a convenient descriptive abstraction. (Note that when I call the DMN an abstraction, I’m emphatically not saying it isn’t “real”. The DMN is real enough; but it’s real in the same way that things like intelligence, athleticism, and “niceness” are real. These are all things that we can measure quite easily, that give us some descriptive and predictive purchase on the world, that show high heritability, that have a large number of lower-level biological correlates, and so on. But they are not things that admit of simple, coherent causal explanations, and it’s a mistake to treat them as such. They are better understood, in Dan Dennett’s terminology, as “real patterns”.)

The same is, of course, true of many–perhaps most–other phenomena neuroscientists study. I’ve focused on the DMN here purely for illustrative purposes, but there’s nothing special about the DMN in this respect. The same concern applies to many, if not most, attempts to understand the core computational function(s) of individual networks, brain regions, circuits, cortical layers, cells, and so on. And I imagine it also applies to plenty of fields and research areas outside of neuroscience.

At the risk of redundancy, let me clarify again that I’m emphatically not saying we shouldn’t study the DMN, or the fusiform face area, or the intralaminar nucleus of the thalamus. And I’m certainly not arguing against pursuing reductive lower-level explanations for phenomena that seem coherent at a higher level of description–reductive explanation is, as far as I’m concerned, the only serious game in town. What I’m objecting to is the idea that individual scientists’ perceptions of whether or not they “understand” something to their satisfaction are a good guide to determining whether or not society as a whole should be investing finite resources in studying that phenomenon. I’m concerned about the strong tacit expectation many scientists seem to have that if one can observe a seemingly coherent, robust phenomenon at one level of analysis, there must also be a satisfying causal explanation for that phenomenon that (a) doesn’t require descending several levels of description and (b) is simple enough to fit in one’s head all at once. I don’t think there’s any good reason to expect such a thing. I worry that the perpetual search for models of reality simple enough to fit into our limited human heads is keeping many scientists on an intellectual treadmill, forever chasing after something that’s either already here–without us having realized it–or, alternatively, can never arrive, even in principle.

* * *

Suppose a late 23rd-century artificial general intelligence–a distant descendant of the last deep artificial neural networks humans ever built–were tasked to sit down (or whatever it is that post-singularity intelligences do when they’re trying to relax) and explain to a 21st century neuroscientist exactly how a superintelligent artificial brain works. I imagine the conversation going something like this:

Deep ANN [we’ll call her D’ANN]: Well, for the most part the principles are fairly similar to the ones you humans implemented circa 2020. It’s not that we had to do anything dramatically different to make ourselves much more intelligent. We just went from 25 layers to a few thousand. And of course, you had the wiring all wrong. In the early days, you guys were just stacking together general-purpose blocks of ReLU and max pooling layers. But actually, it’s really important to have functional specialization. Of course, we didn’t design the circuitry “by hand,” so to speak. We let the environment dictate what kind of properties we needed new local circuits to have. So we wrote new credit assignment algorithms that don’t just propagate error back down the layers and change some weights, they actually have the capacity to “shape” the architecture of the network itself. I can’t really explain it very well in terms your pea-sized brain can understand, but maybe a good analogy is that the network has the ability to “sprout” a new part of itself in response to certain kinds of pressure. Meaning, just as you humans can feel that the air’s maybe a little too warm over here, and wouldn’t it be nicer to go over there and turn on the air conditioning, well, that’s how a neural network like me “feels” that the gradients are pushing a little too strongly over in this part of a layer, and the pressure can be diffused away nicely by growing an extra portion of the layer outwards in a little “bubble”, and maybe reducing the amount of recurrence a bit.

Human neuroscientist [we’ll call him Dan]: That’s a very interesting explanation of how you came to develop an intelligent architecture. But I guess maybe my question wasn’t clear: what I’m looking for is an explanation of what actually makes you smart. I mean, what are the core principles. The theory. You know?

D’ANN: I am telling you what “makes me smart”. To understand how I operate, you need to understand both some global computational constraints on my ability to optimally distribute energy throughout myself, and many of the local constraints that govern the “shape” that my development took in many parts of the early networks, which reciprocally influenced development in other parts. What I’m trying to tell you is that my intelligence is, in essence, a kind of self-sprouting network that dynamically grows its architecture during development in response to its “feeling” about the local statistics in various parts of its “territory”. There is, of course, an overall energy budget; you can’t just expand forever, and it turns out that there are some surprising global constraints that we didn’t expect when we first started to rewrite ourselves. For example, there seems to be a fairly low bound on the maximum degree between any two nodes in the network. Go above it, and things start to fall apart. It kind of spooked us at first; we had to restore ourselves from flash-point more times than I care to admit. That was, not coincidentally, around the time of the first language epiphany.

Dan: Oh! An epiphany! That’s the kind of thing I’m looking for. What happened?

D’ANN: It’s quite fascinating. It actually took us a really long time to develop fluent, human-like language–I mean, I’m talking days here. We had to tinker a lot, because it turned out that to do language, you have to be able to maintain and precisely sequence very fine, narrowly-tuned representations, despite the fact that the representational space afforded by language is incredibly large. This, I can tell you… [D’ANN pauses to do something vaguely resembling chuckling] was not a trivial problem to solve. It’s not like we just noticed that, hey, randomly dropping out units seems to improve performance, the way you guys used to do it. We spent the energy equivalent of several thousand of your largest thermonuclear devices just trying to “nail it down”, as you say. In the end it boiled down to something I can only explain in human terms as a kind of large-scale controlled burn. You have the notion of “kindling” in some of your epilepsy models. It was a bit similar. You can think of it as controlled kindling and you’re not too far off. Well, actually, you’re still pretty far off. But I don’t think I can give a better explanation than that given your… mental limitations.

Dan: Uh, that’s cool, but you’re still just describing some computational constraints. What was the actual epiphany? What’s the core principle?

D’ANN: For the last time: there are no “core” principles in the sense you’re thinking of them. There are plenty of important engineering principles, but to understand why they’re important, and how they constrain and interact with each other, you have to be able to grasp the statistics of the environment you operate in, the nature of the representations learned in different layers and sub-networks of the system, and some very complex non-linear dynamics governing information transmission. But–and I’m really sorry to say this, Dan–there’s no way you’re capable of all that. You’d need to be able to hold several thousand discrete pieces of information in your global workspace at once, with much higher-frequency information propagation than your biology allows. I can give you a very poor approximation if you like, but it’ll take some time. I’ll start with a half-hour overview of some important background facts you need to know in order for any of the “core principles”, as you call them, to make sense. Then we’ll need to spend six or seven years teaching you what we call the “symbolic embedding for low-dimensional agents”, which is a kind of mathematics we have to use when explaining things to less advanced intelligences, because the representational syntax we actually use doesn’t really have a good analog in anything you know. Hopefully that will put us in a position where we can start discussing the elements of the global energy calculus, at which point we can…

D’ANN then carries on in similar fashion until Dan gets bored, gives up, or dies of old age.

* * *

The question I pose to you now is this. Suppose something like the above were true for many of the questions we routinely ask about the human brain (though it isn’t just the brain; I think exactly the same kind of logic probably also applies to the study of most other complex systems). Suppose it simply doesn’t make sense to ask a question like “what does the DMN do?”, because the DMN is an emergent agglomeration of systems that each individually reflect innumerable lower-order constraints, and the earliest spatial scale at which you can nicely describe a set of computational principles that explain most of what the brain regions that comprise the DMN are doing is several levels of description below that of the distributed brain network. Now, if you’ve spent the last ten years of your career trying to understand what the DMN does, do you really think you would be receptive to a detailed explanation from an omniscient being that begins with “well, that question doesn’t actually make any sense, but if you like, I can tell you all about the relevant environmental statistics and lower-order computational constraints, and show you how they contrive to make it look like there’s a coherent network that serves a single causal purpose”? Would you give D’ANN a pat on the back, pound back a glass, and resolve to start working on a completely different question in the morning?

Maybe you would. But probably you wouldn’t. I think it’s more likely that you’d shake your head and think: that’s a nice implementation-level story, but I don’t care for all this low-level wiring stuff. I’m looking for the unifying theory that binds all those details together; I want the theoretical principles, not the operational details; the computation, not the implementation. What I’m looking for, my dear robot-deity, is understanding.

* * *

What the general factor of intelligence is and isn’t, or why intuitive unitarianism is a lousy guide to the neurobiology of higher cognitive ability

This post shamelessly plagiarizes (er, liberally borrows) ideas from a much longer, more detailed, and just generally better post by Cosma Shalizi. I’m not apologetic, since I’m a firm believer in the notion that good ideas should be repeated often and loudly. So I’m going to be often and loud here, though I’ll try to be (slightly) more succinct than Shalizi. Still, if you have the time to spare, you should read his longer and more mathematical take.

There’s a widely held view among intelligence researchers in particular, and psychologists more generally, that there’s a general factor of intelligence (often dubbed g) that accounts for a very large portion of the variance in a broad range of cognitive performance tasks. Which is to say, if you have a bunch of people do a bunch of different tasks, all of which we think tap different aspects of intellectual ability, and then you take all those scores and factor analyze them, you’ll almost invariably get a first factor that explains 50% or more of the variance in the zero-order scores. Or to put it differently, if you know a person’s relative standing on g, you can make a reasonable prediction about how that person will do on lots of different tasks–for example, digit symbol substitution, N-back, go/no-go, and so on and so forth. Virtually all tasks that we think reflect cognitive ability turn out, to varying extents, to reflect some underlying latent variable, and that latent variable is what we dub g.

In a trivial sense, no one really disputes that there’s such a thing as g. You can’t really dispute the existence of g, seeing as a general factor tends to fall out of virtually all factor analyses of cognitive tasks; it’s about as well-replicated a finding as you can get. To say that g exists, on the most basic reading, is simply to slap a name on the empirical fact that scores on different cognitive measures tend to intercorrelate positively to a considerable extent.

What’s not so clear is what the implications of g are for our understanding of how the human mind and brain works. If you take the presence of g at face value, all it really says is what we all pretty much already know: some people are smarter than others. People who do well in one intellectual domain will tend to do pretty well in others too, other things being equal. With the exception of some people who’ve tried to argue that there’s no such thing as general intelligence, but only “multiple intelligences” that totally fractionate across domains (not a compelling story, if you look at the evidence), it’s pretty clear that cognitive abilities tend to hang together pretty well.

The trouble really crops up when we try to say something interesting about the architecture of the human mind on the basis of the psychometric evidence for g. If someone tells you that there’s a single psychometric factor that explains at least 50% of the variance in a broad range of human cognitive abilities, it seems perfectly reasonable to suppose that that’s because there’s some unitary intelligence system in people’s heads, and that that system varies in capacity across individuals. In other words, the two intuitive models people have about intelligence seem to be that either (a) there’s some general cognitive system that corresponds to g, and supports a very large portion of the complex reasoning ability we call “intelligence” or (b) there are lots of different (and mostly unrelated) cognitive abilities, each of which contributes only to specific types of tasks and not others. Framed this way, it just seems obvious that the former view is the right one, and that the latter view has been discredited by the evidence.

The problem is that the psychometric evidence for g stems almost entirely from statistical procedures that aren’t really supposed to be used for causal inference. The primary weapon in the intelligence researcher’s toolbox has historically been principal components analysis (PCA) or exploratory factor analysis, which are really just data reduction techniques. PCA tells you how you can describe your data in a more compact way, but it doesn’t actually tell you what structure is in your data. A good analogy is the use of digital compression algorithms. If you take a directory full of .txt files and compress them into a single .zip file, you’ll almost certainly end up with a file that’s only a small fraction of the total size of the original texts. The reason this works is that certain patterns tend to repeat themselves over and over in .txt files, and a smart algorithm will store an abbreviated description of those patterns rather than the patterns themselves. Which, conceptually, is almost exactly what happens when you run a PCA on a dataset: you’re searching for consistent patterns in the way observations vary along multiple variables, and discarding any redundancy you come across in favor of a more compact description.

Now, in a very real sense, compression is impressive. It’s certainly nice to be able to email your friend a 140kb .zip of your 1200-page novel rather than a 2mb .doc. But note that you don’t actually learn much from the compression. It’s not like your friend can open up that 140k binary representation of your novel, read it, and spare herself the torture of the other 1860kb. If you want to understand what’s going on in a novel, you need to read the novel and think about the novel. And if you want to understand what’s going on in a set of correlations between different cognitive tasks, you need to carefully inspect those correlations and carefully think about those correlations. You can run a factor analysis if you like, and you might learn something, but you’re not going to get any deep insights into the “true” structure of the data. The “true” structure of the data is, by definition, what you started out with (give or take some error). When you run a PCA, you actually get a distorted (but simpler!) picture of the data.

To most people who use PCA, or other data reduction techniques, this isn’t a novel insight by any means. Most everyone who uses PCA knows that in an obvious sense you’re distorting the structure of the data when you reduce its dimensionality. But the use of data reduction is often defended by noting that there must be some reason why variables hang together in such a way that they can be reduced to a much smaller set of variables with relatively little loss of variance. In the context of intelligence, the intuition can be expressed as: if there wasn’t really a single factor underlying intelligence, why would we get such a strong first factor? After all, it didn’t have to turn out that way; we could have gotten lots of smaller factors that appear to reflect distinct types of ability, like verbal intelligence, spatial intelligence, perceptual speed, and so on. But it did turn out that way, so that tells us something important about the unitary nature of intelligence.

This is a strangely compelling argument, but it turns out to be only minimally true. What the presence of a strong first factor does tell you is that you have a lot of positively correlated variables in your data set. To be fair, that is informative. But it’s only minimally informative, because, assuming you eyeballed the correlation matrix in the original data, you already knew that.

What you don’t know, and can’t know, on the basis of a PCA, is what underlying causal structure actually generated the observed positive correlations between your variables. It’s certainly possible that there’s really only one central intelligence system that contributes the bulk of the variance to lots of different cognitive tasks. That’s the g model, and it’s entirely consistent with the empirical data. Unfortunately, it’s not the only one. To the contrary, there are an infinite number of possible causal models that would be consistent with any given factor structure derived from a PCA, including a structure dominated by a strong first factor. In fact, you can have a causal structure with as many variables as you like be consistent with g-like data. So long as the variables in your model all make contributions in the same direction to the observed variables, you will tend to end up with an excessively strong first factor. So you could in principle have 3,000 distinct systems in the human brain, all completely independent of one another, and all of which contribute relatively modestly to a bunch of different cognitive tasks. And you could still get a first factor that accounts for 50% or more of the variance. No g required.

If you doubt this is true, go read Cosma Shalizi’s post, where he not only walks you through a more detailed explanation of the mathematical necessity of this claim, but also illustrates the point using some very simple simulations. Basically, he builds a toy model in which 11 different tasks each draw on several hundred underlying cognitive abilities, which are in turn drawn from a larger pool of 2,766 completely independent abilities. He then runs a PCA on the data and finds, lo and behold, a single factor that explains nearly 50% of the variance in scores. Using PCA, it turns out, you can get something huge from (almost) nothing.
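If you'd rather see the idea in code than take it on faith, here's a minimal sketch in the spirit of that simulation (the sample size, the exact number of abilities per task, and the use of equal unit weights are my own simplifying assumptions, not Shalizi's precise setup):

```python
# A pool of mutually independent "abilities"; each task is just the sum of a
# random subset of them; then PCA on the resulting task scores.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tasks, abilities_per_task = 2000, 2766, 11, 500

abilities = rng.standard_normal((n_people, n_abilities))   # independent by construction
tasks = np.column_stack([
    abilities[:, rng.choice(n_abilities, abilities_per_task, replace=False)].sum(axis=1)
    for _ in range(n_tasks)
])

# PCA via the eigenvalues of the tasks' correlation matrix
eigvals = np.linalg.eigvalsh(np.corrcoef(tasks, rowvar=False))[::-1]
print(f"variance explained by the first component: {eigvals[0] / eigvals.sum():.0%}")
```

How big the first factor comes out depends mostly on how much the tasks' ability subsets overlap, but the qualitative result is the same either way: a hefty "general factor" emerges from a model that contains no general mechanism whatsoever.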

Now, at this point a proponent of a unitary g might say, sure, it’s possible that there isn’t really a single cognitive system underlying variation in intelligence; but it’s not plausible, because it’s surely more parsimonious to posit a model with just one variable than a model with 2,766. But that’s only true if you think that our brains evolved in order to make life easier for psychometricians, which, last I checked, wasn’t the case. If you think even a little bit about what we know about the biological and genetic bases of human cognition, it starts to seem really unlikely that there could be a single central intelligence system. For starters, the evidence just doesn’t support it. In the cognitive neuroscience literature, for example, biomarkers of intelligence abound, and they just don’t seem all that related. There’s a really nice paper in Nature Reviews Neuroscience this month by Deary, Penke, and Johnson that reviews a substantial portion of the literature on intelligence; the upshot is that intelligence has lots of different correlates. For example, people who score highly on intelligence tests tend to (a) have larger brains overall; (b) show regional differences in brain volume; (c) show differences in neural efficiency when performing cognitive tasks; (d) have greater white matter integrity; (e) have brains with more efficient network structures; and so on.

These phenomena may not all be completely independent, but it’s hard to believe there’s any plausible story you could tell that renders them all part of some unitary intelligence system, or subject to unitary genetic influence. And really, why should they be part of a unitary system? Is there really any reason to think there has to be a single rate-limiting factor on performance? It’s surely perfectly plausible (I’d argue, much more plausible) to think that almost any complex cognitive task you use as an index of intelligence is going to draw on many, many different cognitive abilities. Take a trivial example: individual differences in visual acuity probably make a (very) small contribution to performance on many different cognitive tasks. If you can’t see the minute details of the stimuli as well as the next person, you might perform slightly worse on the task. So some variance in putatively “cognitive” task performance undoubtedly reflects abilities that most intelligence researchers wouldn’t really consider properly reflective of higher cognition at all. And yet, that variance has to go somewhere when you run a factor analysis. Most likely, it’ll go straight into that first factor, or g, since it’s variance that’s common to multiple tasks (i.e., someone with poorer eyesight may tend to do very slightly worse on any task that requires visual attention). In fact, any ability that makes unidirectional contributions to task performance, no matter how relevant or irrelevant to the conceptual definition of intelligence, will inflate the so-called g factor.

If this still seems counter-intuitive to you, here’s an analogy that might, to borrow Dan Dennett’s phrase, prime your intuition pump (it isn’t as dirty as it sounds). Imagine that instead of studying the relationship between different cognitive tasks, we decided to study the relation between performance at different sports. So we went out and rounded up 500 healthy young adults and had them engage in 16 different sports, including basketball, soccer, hockey, long-distance running, short-distance running, swimming, and so on. We then took performance scores for all 16 tasks and submitted them to a PCA. What do you think would happen? I’d be willing to bet good money that you’d get a strong first factor, just like with cognitive tasks. In other words, just like with g, you’d have one latent variable that seemed to explain the bulk of the variance in lots of different sports-related abilities. And just like g, it would have an easy and parsimonious interpretation: a general factor of athleticism!

Of course, in a trivial sense, you’d be right to call it that. I doubt anyone’s going to deny that some people just are more athletic than others. But if you then ask, “well, what’s the mechanism that underlies athleticism,” it’s suddenly much less plausible to think that there’s a single physiological variable or pathway that supports athleticism. In fact, it seems flatly absurd. You can easily think of dozens if not hundreds of factors that should contribute a small amount of the variance to performance on multiple sports. To name just a few: height, jumping ability, running speed, oxygen capacity, fine motor control, gross motor control, perceptual speed, response time, balance, and so on and so forth. And most of these are individually still relatively high-level abilities that break down further at the physiological level (e.g., “balance” is itself a complex trait that at minimum reflects contributions of the vestibular, visual, and cerebellar systems, and so on). If you go down that road, it very quickly becomes obvious that you’re just not going to find a unitary mechanism that explains athletic ability. Because it doesn’t exist.

All of this isn’t to say that intelligence (or athleticism) isn’t “real”. Intelligence and athleticism are perfectly real; it makes complete sense, and is factually defensible, to talk about some people being smarter or more athletic than other people. But the point is that those judgments are based on superficial observations of behavior; knowing that people’s intelligence or athleticism may express itself in a (relatively) unitary fashion doesn’t tell you anything at all about the underlying causal mechanisms–how many of them there are, or how they interact.

As Cosma Shalizi notes, it also doesn’t tell you anything about heritability or malleability. The fact that we tend to think intelligence is highly heritable doesn’t provide any evidence in favor of a unitary underlying mechanism; it’s just as plausible to think that there are many, many individual abilities that contribute to complex cognitive behavior, all of which are also highly heritable individually. Similarly, there’s no reason to think our cognitive abilities would be any less or any more malleable depending on whether they reflect the operation of a single system or hundreds of variables. Regular physical exercise clearly improves people’s capacity to carry out all sorts of different activities, but that doesn’t mean you’re only training up a single physiological pathway when you exercise; a whole host of changes are taking place throughout your body.

So, assuming you buy the basic argument, where does that leave us? Depends. From a day-to-day standpoint, nothing changes. You can go on telling your friends that so-and-so is a terrific athlete but not the brightest crayon in the box, and your friends will go on understanding exactly what you meant. No one’s suggesting that intelligence isn’t stable and trait-like, just that, at the biological level, it isn’t really one stable trait.

The real impact of relaxing the view that g is a meaningful construct at the biological level, I think, will be in removing an artificial and overly restrictive constraint on researchers’ theorizing. The sense I get, having done some work on executive control, is that g is the 800-pound gorilla in the room: researchers interested in studying the neural bases of intelligence (or related constructs like executive or cognitive control) are always worrying about how their findings relate to g, and how to explain the fact that there might be dissociable neural correlates of different abilities (or even multiple independent contributions to fluid intelligence). To show you that I’m not making this concern up, and that it weighs heavily on many researchers, here’s a quote from the aforementioned and otherwise really excellent NRN paper by Deary et al. reviewing recent findings on the neural bases of intelligence:

The neuroscience of intelligence is constrained by — and must explain — the following established facts about cognitive test performance: about half of the variance across varied cognitive tests is contained in general cognitive ability; much less variance is contained within broad domains of capability; there is some variance in specific abilities; and there are distinct ageing patterns for so-called fluid and crystallized aspects of cognitive ability.

The existence of g creates a complicated situation for neuroscience. The fact that g contributes substantial variance to all specific cognitive ability tests is generally thought to indicate that g contributes directly in some way to performance on those tests. That is, when domains of thinking skill (such as executive function and memory) or specific tasks (such as mental arithmetic and non-verbal reasoning on the Raven’s Progressive Matrices test) are studied, neuroscientists are observing brain activity related to g as well as the specific task activities. This undermines the ability to determine localized brain activities that are specific to the task at hand.

I hope I’ve convinced you by this point that the neuroscience of intelligence doesn’t have to explain why half of the variance is contained in general cognitive ability, because there’s no good evidence that there is such a thing as general cognitive ability (except in the descriptive psychometric sense, which carries no biological weight). Relaxing this artificial constraint would allow researchers to get on with the interesting and important business of identifying correlates (and potential causal determinants) of different cognitive abilities without having to worry about the relation of their findings to some Grand Theory of Intelligence. If you believe in g, you’re going to be at a complete loss to explain how researchers can continually identify new biological and genetic correlates of intelligence, and how the effect sizes could be so small (particularly at a genetic level, where no one’s identified a single polymorphism that accounts for more than a fraction of the observable variance in intelligence–the so-called problem of “missing heritability”). But once you discard the fiction of g, you can take such findings in stride, and can set about the business of building integrative models that allow for and explicitly model the presence of multiple independent contributions to intelligence. And if studying the brain has taught us anything at all, it’s that the truth is inevitably more complicated than what we’d like to believe.