Kerry's Sanctum Sanctorum

Tag Archives: marketing research

Marketing research: On the gap between traditional and next-gen

19th November, 2011 · Kerry Butt · 1 Comment

Last Thursday afternoon, I sat in a room with some (actually far too few) other people from around the Government of Canada to watch a webinar organized by MRIA (the Marketing Research and Intelligence Association, my professional association) on the subject of mobile research. I never intended to write much about my professional life on this blog, but this webinar got me thinking. Or rather, it was one comment from one viewer that got me thinking.

The webinar featured six speakers from five companies active in the mobile research area (i.e., research conducted using mobile phones, apps, etc.). The presentations ranged from the blah (the presentation from Qualvu’s John Williamson was basically just a sales pitch) to the extremely interesting (Jason Cyr of Tiipz [who looks about 16 in his photo] gave a fascinating presentation on the use of mobile research to gather real-time data at events like music festivals).

But, as I mentioned, it was one question that really got me thinking. That question was, in part, about validity and the fact that none of the presenters mentioned it. Well, when this question was read out by the webinar host, you could almost hear the groans from the presenters. That’s because validity, more than perhaps any other issue, divides practitioners of what is called “next-gen” marketing research from traditional market researchers. Much has been written and said about this issue, much of it in blogs and on social media platforms. In a nutshell, traditional market researchers believe that next-gen research does not have a “scientific” foundation and, thus, its findings cannot be tied to the population as a whole with any precision. On the other hand, next-gen researchers believe traditional market research is mired in “old thinking” and just doesn’t reach large groups of people (and those people, mostly the young, are of greatest interest to many of the companies that are buying market research services).

Of course, the validity issue strikes to the heart of this divide. Proponents of traditional market research talk about “representativeness” and “probability samples” as the theoretical underpinnings of the whole market research field (at least the quantitative side of it). These underpinnings allow the results of marketing research to be generalized to the population as a whole with a known degree of confidence (see my recent post on “margin of error”). For traditional researchers, if you don’t have this, you have nothing.

On the other hand, next-gen market researchers are often dismissive of these traditional sacred cows. In response to the validity question at the MRIA webinar, one of the presenters told listeners to “just get over” not using probability samples. (This echoes the view of another of the younger generation of market researchers, who said of representative samples, “forget it, we never had it.”) “Gamification”, social media “listening” and “scraping” are some of the buzzwords here.

Seeing the next-gen mindset come up against traditionalist thinking, as it did in the MRIA webinar, made me think about where I stand on these issues. As 2012 will be my 25th year in the marketing and public opinion research field, one might think that I fall naturally into the traditionalist camp. However, I try to keep an open mind and to keep abreast of the latest developments in my field. And it’s those last two words – “my field” – that are the key, not only to where I stand, but also to the divide between traditional and next-gen research.

What do I mean? Well, I have always thought of myself as an “opinion researcher.” What I sold to my clients over the years was the ability to tell them what the public’s opinion was about, well, whatever the subject of the research was. If you wanted to know what the general public thought about your potential new product, or what your customers thought about your service, or what Canadian citizens thought about your government department’s proposed policy, I could tell you that. And I put my money where my mouth was. I didn’t just say people liked your new product; I said 41% would buy it, and I sold the confidence that that number was pretty accurate. What set me apart from my competitors? Well, I could tell you what people’s opinions really were, through the solid design of my study and the research instruments, and the artful analysis of the data collected.

But when I read about and see presentations of next-gen research, I see something different. They are not selling “opinion”; they are selling insight. By listening to social media (for example), they can give their clients insights into their brand that they might otherwise not get. Social media listening is more akin to observational research than it is to traditional market research. It’s a step removed from merely asking people what they think (even if it’s not quite the same as seeing what they actually do).

The other thing that I see in next-gen research that is fundamentally different from the traditional mindset is engagement. Whereas traditional research tries mightily to remain unobtrusive (stemming from its roots in scientific experimentation), next-gen research is in your face. Companies today are much more likely to see the act of research itself as a means of engaging the research participants, as well as learning from them.

Is there a middle ground? I’m not sure there is, or that there should be. What is important, I think, is that proponents of both camps realize (and communicate to their clients) what their research can AND can’t do. As long as clients have no illusions as to the strengths and limitations of the research they are buying, I think both traditional and next-gen techniques can provide valuable information.

What do you think? Comments, as always, are welcome. I’m particularly interested in hearing from next-gen researchers, as I may have misrepresented their position. I post all comments that are not spam.

Posted in Market research | Tags: marketing research, MRIA, NGMR, probability sample, social media |

Lies, damned lies and…margin of error

6th November, 2011 · Kerry Butt · Leave a comment

Most readers will have at least heard of the phrase “margin of error” (MoE). Those of us who work in the field of opinion research (my name for what is usually called “marketing research” or, in the government, “public opinion research [POR]”) have often agonized over this concept – what it actually means and how best to communicate it. I refined the margin of error reference that I wrote into my research reports over a 25-year career, ending up with something like: “The maximum margin of sampling error for this study (that is, the error attributable to interviewing a sample of the population rather than the entire population) is ±4.9%, 19 times in 20. Margins of error for subsamples of the total sample will be larger.”

Buried in this bit of gobbledygook are a number of very important concepts, which are often expressed in different ways (I make NO claim that what I used is best) and given different levels of emphasis. Three key concepts are represented by the words/phrases “maximum,” “sampling error” and “19 times in 20.”

Sampling error is the only one of these three key concepts to be explicitly defined in the MoE reference as written above. The reference defines this error as the error resulting from interviewing a sample, rather than the population. The underlying idea is that we interviewed a sample because it was impractical (for cost and other reasons) to interview the whole population, and there is an “accuracy cost” to doing this.

The word “maximum” is more arcane. It is there because the exact margin of error for any given finding in the study depends on the true proportion for that measure that exists in the population. To illustrate, imagine a survey that reports that 23% of the surveyed group believes X. Later in the survey, we find that 45% supports proposal Y. The MoE for these two observations differs. The calculation is: MoE = 1.96 * SQRT((p * (1-p))/N), where N is the sample size, 1.96 represents the multiplier for a 95% confidence interval (“19 times in 20”) and p is the proportion for a given measure that exists in the population. As we usually don’t know this proportion (why would we need to do a survey if we did?), we estimate it based on the survey proportion. I don’t want to turn this post into a stats class, so we won’t do these calculations here. If you do them, you will see that the MoE for the 45% finding is higher than that for the 23% finding. If you were to do them for every possible proportion from .01 to .99, you would see that the MoE is highest when p = .5 (or 50%). When a single MoE is cited in a research report, it is usually this one (as it is the most conservative), and it is cited (or should be) as the “maximum” MoE.
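For readers who do want to check the arithmetic, here is a minimal sketch in Python (my own illustration; the sample size of 400 is hypothetical, chosen because it happens to reproduce the ±4.9% maximum cited in the sample MoE statement above):

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p with sample size n,
    at the 95% confidence level (z = 1.96, i.e. "19 times in 20")."""
    return z * sqrt(p * (1 - p) / n)

n = 400  # hypothetical sample size
for p in (0.23, 0.45, 0.50):
    print(f"p = {p:.2f}: MoE = +/-{100 * margin_of_error(p, n):.2f} points")

# p = 0.23: MoE = +/-4.12 points
# p = 0.45: MoE = +/-4.88 points
# p = 0.50: MoE = +/-4.90 points  <- the maximum
```

As the output shows, the 45% finding does carry a larger MoE than the 23% finding, and p = .5 gives the largest MoE of all.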

The final phrase of interest in the MoE reference is “19 times in 20.” As I noted briefly above, this represents the 95% confidence interval (or CI). But why 95% for the CI? Why not 90%, or 80%, or 99%? Well, there is no real reason. This is an arbitrary standard. It makes sense and seems logical, but there is no “mathematical law” that states it has to be 95%. Similarly, in significance testing (e.g., testing the likelihood that the difference between two observations is due to chance), if the p-value calculated is less than .05 (i.e., the chance that the difference being tested does NOT reflect an underlying difference is less than 5%), then we conclude that the difference is real (or “statistically significant”). Who says so? Well, a guy named R.A. Fisher does (or, at least, did – he died in 1962).
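To make that convention concrete, here is a hedged sketch (all figures invented for illustration) of a two-proportion z-test, one common form of the significance testing described above:

```python
from math import sqrt, erf

def two_proportion_p_value(p1, n1, p2, n2):
    """Two-sided p-value for the difference between two independent
    sample proportions, using a pooled z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return 2 * (1 - cdf)

# E.g., 52% vs. 45% in two independent samples of 400:
print(f"p-value = {two_proportion_p_value(0.52, 400, 0.45, 400):.3f}")
# p-value = 0.048 -> below .05, "statistically significant" by the convention
```

Whether a p-value of .048 versus .052 should really change anyone’s conclusions is, of course, exactly the kind of question the arbitrary .05 cutoff glosses over.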

(I should note that, while much of the analysis that opinion researchers do is based on Fisher’s work, this does NOT mean that the interpretation of these analyses is cut-and-dried. Far from it. For a fascinating discussion of the perils of Fisher-type statistical inference, see here. No, go there now – this article is a must-read. I’ll be here when you get back. Suffice it to say that erroneous conclusions can easily be drawn, even with the best of intentions, from “statistically significant” findings.)

I have found in my career that the “19 times in 20” CI is one of the most misunderstood concepts we communicate in opinion research reports. Some take it as a measure of certainty – “I am 95% sure this number is correct!”; others take it as a cop-out – “If you’re wrong, you’ll say this was the one study in 20…” Still others assume all outcomes within the stated MoE are equally likely, hence the talk of “statistical ties” that you often see in the reporting of political polls. (In fact, the survey observation is the most likely outcome within the MoE.) Adding to the confusion is the fact that the CI is explained in a number of ways in MoE statements. The one that seemed to make the most sense to my clients was this: “When we report a survey finding, the range identified by the percentage we found, ±X%, will include the true population proportion in 19 of 20 samples.” In other words, there is a one-in-20 chance that the actual population proportion will lie outside this range.
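One way to internalize that interpretation is to simulate it. The sketch below (numbers hypothetical; the 41% is a nod to the earlier post) draws repeated samples from a population with a known proportion and counts how often the interval defined by the sample proportion ±MoE captures the truth; the coverage comes out near 95%, i.e., 19 times in 20:

```python
import random
from math import sqrt

def coverage(true_p=0.41, n=400, trials=10_000, z=1.96, seed=1):
    """Fraction of simulated samples whose interval (p_hat +/- MoE)
    contains the true population proportion."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # One simulated survey: n independent respondents
        p_hat = sum(rng.random() < true_p for _ in range(n)) / n
        moe = z * sqrt(p_hat * (1 - p_hat) / n)
        hits += (p_hat - moe) <= true_p <= (p_hat + moe)
    return hits / trials

print(f"coverage = {coverage():.3f}")  # roughly 0.95, i.e., 19 times in 20
```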

So you can see that there is a lot of meaning (and potential for confusion) hidden within the short MoE statement. Given that this statement is often the ONLY statement in a report that in any way constrains the conclusions reached, I think it is in the best interests of users of research to make sure they fully understand what it means (and what it doesn’t mean).

Posted in Science | Tags: margin of error, marketing research, statistics |
