
Human psychophysics and monkey recording studies traditionally have too few subjects per study to attempt generalisation to the population. An inference to the population then cannot be justified on the basis of the data. But we may make that inference based on the *assumption* that what holds for the group studied holds for the population. The latter assumption (often implicit) may be justified for certain questions. It is the reason why psychophysicists and monkey electrophysiologists care about their peers’ results.

So I would argue that it is *incorrect* to say that type-1 error is inflated in fixed-effects analyses, because “type-1 error” in this statement refers to a hypothesis test that hasn’t been attempted (i.e. a population-level hypothesis test).

The question then becomes whether the interpretation in the papers goes beyond the animals studied, without the proper caveat that this inference is not supported by the statistics (but requires the prior belief that what goes for this group holds in general).
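
To make the distinction concrete, here is a toy simulation (all variance components, subject counts, and trial counts below are assumed values for illustration, not taken from any of the papers discussed). When each sampled subject has its own true effect but those effects average to zero in the population, pooling trials across subjects as if they were independent rejects the *population-level* null far more often than the nominal 5%, while a test on subject means stays near the nominal level; the “inflation” only exists relative to the population-level hypothesis.

```python
import math
import random
import statistics

random.seed(0)

TAU, SIGMA = 1.0, 1.0          # assumed sd of subject effects and trial noise
N_SUBJ, N_TRIALS, N_SIMS = 4, 50, 200

pooled_rejections = by_subject_rejections = 0
for _ in range(N_SIMS):
    # Population-level null: subject effects average to zero across the
    # population, but each sampled subject has its own true effect d.
    all_trials, subj_means = [], []
    for _ in range(N_SUBJ):
        d = random.gauss(0, TAU)
        trials = [d + random.gauss(0, SIGMA) for _ in range(N_TRIALS)]
        all_trials += trials
        subj_means.append(statistics.mean(trials))

    # Fixed-effects style: pool all trials as if they were independent.
    n = len(all_trials)
    t = statistics.mean(all_trials) / (statistics.stdev(all_trials) / math.sqrt(n))
    if abs(t) > 1.97:          # approx. 5% two-sided critical value, df = 199
        pooled_rejections += 1

    # Random-effects style: one number per subject.
    t = statistics.mean(subj_means) / (statistics.stdev(subj_means) / math.sqrt(N_SUBJ))
    if abs(t) > 3.18:          # 5% two-sided critical value, df = 3
        by_subject_rejections += 1

print(pooled_rejections / N_SIMS, by_subject_rejections / N_SIMS)
```

The pooled test’s rejection rate is far above 5% only if we insist on reading it as a population-level test; as a test about the sampled subjects’ own effects, it is answering a different question.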

“So even if we are willing to rely on hierarchical ordering and posit that within-subject variance is greater than between-subject variance (implying intraclass correlation is less than 0.5), it still might not be true that within-subject variance is greater than between-subject variance + subject-by-condition interaction variance.”
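
The algebraic link the quote relies on can be sketched in a few lines (the function and variable names are mine, for illustration):

```python
def icc(between_var, within_var):
    """Intraclass correlation: the share of total variance that is
    between subjects rather than within them."""
    return between_var / (between_var + within_var)

# within-subject variance > between-subject variance  =>  ICC < 0.5
print(icc(1.0, 3.0))  # → 0.25
print(icc(3.0, 1.0))  # → 0.75
```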

http://stats.stackexchange.com/questions/72819/relative-variances-of-higher-order-vs-lower-order-random-terms-in-mixed-models

There are (at least) two complications in applying the hierarchical ordering principle.

First, it is not clear where in the hierarchy we should consider residual variance to be. (Note that the “within-subject variance” in this case = residual variance.) Are the errors the lowest-order effects or the highest-order effects? I usually consider them to be the lowest-order effects, but that’s just because empirically it seems that residual variance is often the largest variance component in the kinds of studies that I personally have experience with. It’s not clear whether we should expect this to be true in other areas.

Second, different study designs confound sources of variation in different ways. In the design considered by Aarts et al., if the “true” or data-generating regression process does in fact include random slope variance (e.g., some rats “would have been” high responders in the other condition, had we observed them in both conditions, while for other rats the reverse is true), then this unobserved variation will be absorbed into the random intercept variance. So even if we are willing to rely on hierarchical ordering and posit that within-subject variance > between-subject variance (implying intraclass correlation < 0.5), it still might not be true that within-subject variance > between-subject variance + subject-by-condition interaction variance. So we could still be wrong about the variance components even if hierarchical ordering is actually correct, just because of the study design.
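
The “absorbed into the random intercept variance” point can be checked with a small simulation (the variance components below are assumed values, not estimates from any study): when each subject is observed in only one condition, the between-subject variance we can actually estimate is the sum of the intercept and slope variances, so the two components are confounded by design.

```python
import random
import statistics

random.seed(1)

S0, S1 = 1.0, 1.0     # assumed sd of random intercepts and random slopes
N_SUBJECTS = 10000

# Each subject is observed in only ONE condition (here condition 1),
# as in the between-subject design discussed above.
means_cond1 = []
for _ in range(N_SUBJECTS):
    b0 = random.gauss(0, S0)      # random intercept
    b1 = random.gauss(0, S1)      # random slope (subject-by-condition effect)
    means_cond1.append(b0 + b1)   # what we observe for a condition-1 subject

observed_var = statistics.variance(means_cond1)
# The slope variance is indistinguishable from intercept variance here:
# observed between-subject variance ≈ S0**2 + S1**2
print(observed_var)
```

Only by observing the same subjects in both conditions could the simulation (or a real study) separate the two components.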

All of that is just to say that a priori statements are problematic; we need data, but the data don’t seem to exist in any systematic form (no one seems to be interested in meta-analyzing random variance components).

I’m guessing your answer is going to be that it totally depends on the design, but I wanted to see if you had any idea of these distributions generally speaking.
