Thursday, March 13, 2014
Credible Claims in Statistical Analyses
You have a data set. Assume for the moment that there is no measurement error in any individual measurement or data point.
1. You might compute the mean and the standard deviation of some measurement. If N is large, the standard error of the mean is likely small. But it is hard for me to believe that the mean ± SE should be taken too seriously if the standard deviation is substantial. That is, the location of the mean may be statistically sharp, but given the substantial dispersion of the measurements, I would be surprised if I should bet on more than two significant figures, often just one. Put differently, suppose you measure 6 ± 2, and the SE is 0.02. Given the spread of the measured values, I would find it hard to distinguish 6 from, say, 7: another measurement might have gotten 7 ± 2, with a similarly small SE. I would not believe that 6 differs from 7 in this context.
2. When I say that I would not believe it, I mean that there is enough noise, non-Gaussian admixture, and junk in the data that the spread given by the SD prevents me from making very sharp claims about the difference between two means.
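A minimal simulation of the scenario in point 1 (the sample sizes and seed are illustrative choices, not from the text): two large samples with true means 6 and 7 and SD 2 each yield a tiny standard error, yet the two populations overlap heavily at the level of individual measurements.

```python
import random
import statistics

random.seed(0)

# Two large samples: true means 6 and 7, SD 2 in both.
n = 10_000
sample_a = [random.gauss(6, 2) for _ in range(n)]
sample_b = [random.gauss(7, 2) for _ in range(n)]

for name, s in (("A", sample_a), ("B", sample_b)):
    mean = statistics.fmean(s)
    sd = statistics.stdev(s)
    se = sd / n ** 0.5  # standard error of the mean: SD / sqrt(N)
    print(f"sample {name}: mean = {mean:.2f}, sd = {sd:.2f}, se = {se:.3f}")

# Crude overlap measure: what fraction of individual values from B
# fall below the mean of A? Despite means 6 vs 7 being "statistically
# sharp", roughly a third of B's individual values do.
mean_a = statistics.fmean(sample_a)
overlap = sum(1 for x in sample_b if x < mean_a) / n
print(f"fraction of B below mean of A: {overlap:.2f}")
```

With N = 10,000 the SE is about 2/100 = 0.02, exactly the sharp-looking figure in the text, while the distributions themselves remain broadly intermingled.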
3. You do a variety of statistical studies. Ahead of time, before you do the studies, estimate what you think the statistics will be: the mean, the SD, the regression coefficients. Roughly estimate them, maybe only their relative sizes. Perhaps previous research gives you a decent idea.
Then do your studies. Are you surprised by any of the statistics that come out? Keep in mind that it is hard to believe more than two significant figures, often one, whatever the statistical error.
4. You are trying to measure the effect of an intervention or the like. I suspect that any effect smaller than 1% is not credible, again whatever its statistical error; maybe the threshold is 10%, maybe 30%. The problem is that typically your model accounts for only a fraction of the variance (R-squared), and you have to assure yourself that the rest is truly random noise, or has been randomized away by your research design. A small amount of impurity or contamination in the data will be problematic.
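A sketch of the contamination worry above (the numbers are illustrative assumptions, not the author's): an intervention with zero true effect, where just 1% of the treated outcomes are corrupted, produces an apparent effect that could easily be mistaken for a small real one.

```python
import random
import statistics

random.seed(1)

# Zero true treatment effect: both groups drawn from the same distribution.
n = 100_000
control = [random.gauss(0, 1) for _ in range(n)]
treated = [random.gauss(0, 1) for _ in range(n)]

# Contaminate 1% of the treated outcomes, e.g. data-entry errors
# that add 10 to the recorded value.
k = n // 100
for i in range(k):
    treated[i] += 10

# The contamination alone shifts the treated mean by about
# 0.01 * 10 = 0.1, an apparent "effect" with no real cause.
effect = statistics.fmean(treated) - statistics.fmean(control)
print(f"apparent effect from 1% contamination: {effect:.3f}")
```

Because N is large, the standard error here is tiny, so the spurious 0.1 "effect" would look statistically sharp; only scrutiny of the data itself exposes it.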
5. Whatever you measure, can you think of a mechanism that would lead, roughly, to the number you measure? This is after you have done your analysis; before, see #3 above. Could your statistical work eliminate a range of candidate mechanisms?
6. If you make claims about a discount rate, say, why should I believe your claim? Have you done sensitivity analyses with different rates to see whether your conclusions are robust? And how many years ahead would you be willing to apply such a discount rate and still believe it credibly reflects our attitude about the more distant future?
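The kind of sensitivity analysis suggested above can be sketched in a few lines (the cash flow, horizon, and rates here are hypothetical choices for illustration): the present value of a fixed benefit of 1 per year over 50 years varies by roughly a factor of three across plausible discount rates.

```python
def present_value(annual_benefit, rate, years):
    """Sum of discounted annual benefits, paid at the end of each year."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

# How sensitive is the conclusion to the assumed rate?
for rate in (0.01, 0.03, 0.05, 0.07):
    pv = present_value(1.0, rate, 50)
    print(f"rate = {rate:.0%}: PV of 50 years of benefits = {pv:.1f}")
```

If a policy conclusion flips somewhere inside that range of rates, the claim rests on the choice of rate rather than on the data.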
7. In the natural sciences, measurements are almost always about actual objects and their properties, say their mass, their energy, and so on. Usually those properties are connected to measurements of other properties, and perhaps to theories that predict values or connect the value of one property to that of another. Hence you can believe that some particular measurements have many significant figures (high accuracy). In the social sciences, as far as I can tell, that is rarely the case, and I see no grounds for such a belief there.