Most readers will have at least heard of the phrase “margin of error” (MoE). Those of us who work in the field of opinion research (my name for what is usually called “marketing research” or, in the government, “public opinion research [POR]”) have often agonized over this concept – what it actually means and how best to communicate it. I refined the margin of error reference that I wrote into my research reports over a 25-year career, ending up with something like: “The maximum margin of sampling error for this study (that is, the error attributable to interviewing a sample of the population rather than the entire population) is ±4.9%, 19 times in 20. Margins of error for subsamples of the total sample will be larger.”

Buried in this bit of gobbledygook are a number of very important concepts, which are often expressed in different ways (I make NO claim that what I used is best) and given different levels of emphasis. Three key concepts are represented by the words/phrases “maximum,” “sampling error” and “19 times in 20.”

Sampling error is the only one of these three key concepts to be explicitly defined in the MoE reference as written above. The reference defines this error as the error resulting from interviewing a sample, rather than the population. The underlying idea is that we interviewed a sample because it was impractical (for cost and other reasons) to interview the whole population, and there is an “accuracy cost” to doing this.

The word “maximum” is more arcane. It is there because the exact margin of error for any given finding in the study depends on the true proportion for that measure that exists in the population. To illustrate, imagine a survey that reports that 23% of the surveyed group believes X. Later in the survey, we find that 45% support proposal Y. The MoE for these two observations differs. The calculation is: **MoE = 1.96 * SQRT((p * (1-p))/N)**, where N is the sample size, 1.96 represents the multiplier for a 95% confidence interval (“19 times in 20”) and p is the proportion for a given measure that exists in the population. As we usually don’t know this proportion (why would we need to do a survey if we did?), we estimate it based on the survey proportion. I don’t want to turn this post into a stats class, so we won’t do these calculations here. If you do them, you will see that the MoE for the 45% finding is higher than that for the 23% finding. If you were to do them for every possible proportion from .01 to .99, you would see that the highest MoE is when p=.5 (or 50%). When a single MoE is cited in a research report, it is usually this one (as it is the most conservative), and it is cited (or should be) as the “maximum” MoE.
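If you do want to try the calculation yourself, here is a short sketch in Python. The sample size of 400 is my own illustrative assumption; it happens to be the N that produces the ±4.9% maximum MoE quoted earlier (1.96 × √(0.25/400) = 0.049 exactly).

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p from a sample of size n,
    at the confidence level implied by multiplier z (1.96 -> 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample of N=400 (the size that gives a 4.9% maximum MoE).
n = 400
for p in (0.23, 0.45, 0.50):
    print(f"p = {p:.2f}: MoE = {margin_of_error(p, n):.2%}")
```

Running this shows the MoE climbing as p approaches .5, which is exactly why the p=.5 figure is the one reported as the “maximum.”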

The final phrase of interest in the MoE reference is “19 times in 20.” As I noted briefly above, this represents the 95% confidence interval (or CI). But why 95% for the CI? Why not 90%, or 80%, or 99%? Well, there is no real reason. This is an arbitrary standard. It makes sense and seems logical, but there is no “mathematical law” that states it has to be 95%. Similarly, in significance testing (e.g., testing the likelihood that the difference between two observations is due to chance), if the p-value calculated is less than .05 (i.e., the chance that the difference being tested does NOT reflect an underlying difference is less than 5%), then we conclude that the difference is real (or “statistically significant”). Who says so? Well, a guy named R.A. Fisher does (or, at least, did – he died in 1962).

(I should note that, while much of the analysis that opinion researchers do is based on Fisher’s work, this does NOT mean that the interpretation of these analyses is cut-and-dried. Far from it. For a fascinating discussion of the perils of Fisher-type statistical inference, see here. No, go there now – this article is a must-read. I’ll be here when you get back. Suffice it to say that erroneous conclusions can easily be drawn, even with the best of intentions, from “statistically significant” findings.)

I have found in my career that the “19 times in 20” CI is one of the most misunderstood concepts we communicate in opinion research reports. Some take it as a measure of certainty – “I am 95% sure this number is correct!”; others take it as a cop-out – “If you’re wrong, you’ll say this was the one study in 20…” Still others assume all outcomes within the stated MoE are equally likely, hence the talk of “statistical ties” that you often see in the reporting of political polls. (In fact, the survey observation is the most likely outcome within the MoE.) Adding to the confusion is the fact that the CI is explained in a number of ways in MoE statements. The one that seemed to make the most sense to my clients was this: “When we report a survey finding, the range identified by the percentage we found, ±X%, will include the true population proportion in 19 of 20 samples.” In other words, there is a one-in-20 chance that the actual population proportion will lie outside this range.
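The “19 of 20 samples” interpretation can be demonstrated directly by simulation. The sketch below assumes a hypothetical true population proportion of 45% and a sample size of 400 (both my own illustrative choices), draws many repeated samples, and counts how often the interval around each sample’s estimate actually contains the true value:

```python
import random

random.seed(42)  # for reproducibility

TRUE_P, N, Z, TRIALS = 0.45, 400, 1.96, 10_000
covered = 0
for _ in range(TRIALS):
    # Draw one simulated survey of N respondents.
    hits = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = hits / N
    moe = Z * (p_hat * (1 - p_hat) / N) ** 0.5
    # Does the interval p_hat ± MoE capture the true proportion?
    if abs(p_hat - TRUE_P) <= moe:
        covered += 1

print(f"Coverage: {covered / TRIALS:.1%}")  # close to 95%
```

The coverage comes out very near 95% – roughly 19 of every 20 simulated surveys produce an interval that contains the true proportion, which is all the “19 times in 20” claim ever promised.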

So you can see that there is a lot of meaning (and potential for confusion) hidden within the short MoE statement. Given that this statement is often the ONLY statement in a report that in any way constrains the conclusions reached, I think it is in the best interests of users of research to make sure they fully understand what it means (and what it doesn’t mean).