Last Thursday afternoon, I sat in a room with some (actually far too few) other people from around the Government of Canada to watch a webinar organized by MRIA (the Marketing Research and Intelligence Association, my professional association), on the subject of mobile research. I really never intended to write too much about my professional life on this blog, but this webinar really got me thinking. Actually, it was one comment from one viewer that got me thinking.
The webinar featured six speakers from five companies active in the mobile research area (i.e., research conducted using mobile phones, apps, etc.). The presentations ranged from the blah (the presentation from Qualvu’s John Williamson was basically just a sales pitch) to the extremely interesting (Jason Cyr of Tiipz [who looks about 16 in his photo] gave a fascinating presentation on the use of mobile research to gather real-time data at events like music festivals).
But, as I mentioned, it was one question that really got me thinking. That question was, in part, about validity and the fact that none of the presenters mentioned it. Well, when this question was read out by the webinar host, you could almost hear the groans from the presenters. That’s because validity, more than perhaps any other issue, divides practitioners of what is called “next-gen” marketing research from traditional market researchers. Much has been written and said about this issue, much of it in blogs and on social media platforms. In a nutshell, traditional market researchers believe that next-gen research does not have a “scientific” foundation and, thus, its findings cannot be tied to the population as a whole with any precision. On the other hand, next-gen researchers believe traditional market research is mired in “old thinking” and just doesn’t reach large groups of people (and those people, mostly the young, are of greatest interest to many of the companies that are buying market research services).
Of course, the validity issue strikes at the heart of this divide. Proponents of traditional market research talk about “representativeness” and “probability samples” as the theoretical underpinnings of the whole market research field (at least the quantitative side of it). These underpinnings allow the results of marketing research to be generalized to the population as a whole with a known degree of confidence (see my recent post on “margin of error”). For traditional researchers, if you don’t have this, you have nothing.
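For readers who want to see what that “known degree of confidence” actually looks like, here is a minimal sketch in Python of the standard margin-of-error calculation for a proportion from a simple random sample. The numbers are purely illustrative (a hypothetical sample of 1,000), not figures from any study discussed here:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random
    sample of size n, at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative example: 41% of a hypothetical n = 1,000
# probability sample say they would buy the product.
moe = margin_of_error(0.41, 1000)
print(f"41% +/- {moe * 100:.1f} points, 19 times out of 20")
```

This is exactly the math that lets a traditional researcher say “41% would buy it, plus or minus about three points” — and exactly the math that has no footing when the sample is not a probability sample.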
On the other hand, next-gen market researchers are often dismissive of these traditional sacred cows. In response to the validity question at the MRIA webinar, one of the presenters said “just get over” not using probability samples. (This echoes the view of another of the younger generation of market researchers, who said of representative samples, “forget it, we never had it.”) “Gamification”, social media “listening” and “scraping” are some of the buzzwords here.
Seeing the next-gen mindset come up against traditionalist thinking, as it did in the MRIA webinar, made me think about where I stand on these issues. As 2012 will be my 25th year in the marketing and public opinion research field, one might think that I fall naturally into the traditionalist camp. However, I try to keep an open mind and to keep abreast of the latest developments in my field. And it’s those last two words – “my field” – that are the key, not only to where I stand, but also to the divide between traditional and next-gen research.
What do I mean? Well, I have always thought of myself as an “opinion researcher.” What I sold to my clients over the years was the ability to tell them what the public’s opinion was about, well, whatever the subject of the research was. If you wanted to know what the general public thought about your potential new product, or what your customers thought about your service, or what Canadian citizens thought about your government department’s proposed policy, I could tell you that. And I put my money where my mouth is. I didn’t just say people liked your new product. I said 41% would buy it. And I sold the confidence that the number was pretty accurate. What set me apart from my competitors? Well, I could tell you what people’s opinions really were, through the solid design of my study and the research instruments, and the artful analysis of the data collected.
But when I read about and see presentations of next-gen research, I see something different. They are not selling “opinion”; they are selling insight. By listening to social media (for example), they can give their clients insights into their brand that they might otherwise not get. Social media listening is more akin to observational research than it is to traditional market research. It’s a step removed from merely asking people what they think (even if it’s not quite the same as seeing what they actually do).
The other thing that I see in next-gen research that is fundamentally different from the traditional mindset is engagement. Whereas traditional research tries mightily to remain unobtrusive (stemming from its roots in scientific experimentation), next-gen research is in your face. Companies today are much more likely to see the act of research itself as a means of engaging the research participants, as well as learning from them.
Is there a middle ground? I’m not sure there is, or that there should be. What is important, I think, is that proponents of both camps realize (and communicate to their clients) what their research can AND can’t do. As long as clients have no illusions as to the strengths and limitations of the research they are buying, I think both traditional and next-gen techniques can provide valuable information.
What do you think? Comments, as always, are welcome. I’m particularly interested in hearing from next-gen researchers, as I may have misrepresented their position. I post all comments that are not spam.