Kathryn Korostaff posted the article below with her thoughts on the criticism surrounding Dr. Amy Cuddy's power-poses research. I suggest reading her piece and the New York Times article linked inside it before continuing, as I assume at least a passing familiarity with the topics discussed in each.
The New York Times Article on Amy Cuddy: Is Survey Research Next?
I see three key issues with Dr. Cuddy's study identified in Kathryn's article:
- Support of the Rationale through Prior Literature. P-curve analyses indicate that the prior research cited was weak, which means the source material buttressing Carney, Cuddy, and Yap's (2010) hypotheses is suspect.
- Sample Sizes. Carney, Cuddy, and Yap (2010) used a sampling approach standard in social psychology (20-25 participants per cell). Subsequent studies used larger samples and didn't fully replicate the findings, which suggests the original result may have been statistical noise found by luck (see the power sketch after this list).
- Demand bias. The respondents knew what the researchers were after and complied. The hormonal effects were not replicated, but the emotional ones were. However, my takeaway from the article was that both Cuddy and Ranehill found an impact on how participants felt, and both downplayed it.
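For a sense of scale on the sample-size point, here is a minimal power sketch in Python (my own illustration, not a reanalysis of the original data; it assumes statsmodels is available, uses Cohen's conventional effect-size benchmarks, and takes n = 21 simply as a point in the 20-25-per-cell range):

```python
# Illustrative sketch only: power of a two-sample t-test at ~21 participants
# per cell, across Cohen's conventional small/medium/large effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    power = analysis.power(effect_size=d, nobs1=21, alpha=0.05, ratio=1.0)
    print(f"d = {d:.1f}, n = 21 per cell -> power = {power:.2f}")

# Per-cell n needed for 80% power at a medium effect (d = 0.5)
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0)
print(f"n per cell for 80% power at d = 0.5: {n_needed:.0f}")

# Even a "large" effect falls short of the conventional 80% power at
# n = 21 per cell, which is why small-sample hits can plausibly be noise
# that got lucky.
```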
Let's take a look at how these issues apply to marketing research:
- Prior Literature: Market research is an applied science in that we take knowledge from more basic research and use it to address specific business objectives. We don't face the same hypothesis-development issues that Carney, Cuddy, and Yap (2010) did. These objectives don't exist as theoretical constructs derived from supporting literature; instead, they are (ideally) concrete statements of needed learnings and insights (e.g., how to expand the user base by…). Additionally, the business objectives and the approach we take may dictate the analysis we conduct (and minimize p-hacking via inferential statistics).
- Sample Sizes. This is an area where we can have huge issues, especially when we start tabulating by demos, psychographic segments, and other factors. We've all had that client who wants to know why men aged 25 to 27 in Segment X don't like our product (where N = 8); the sketch after this list shows just how shaky that read really is. Or we want to get a read on our target segments. Or, worse yet, we take qualitative findings as hard facts rather than the directional assessments they are. To quote Mad-Eye Moody: constant vigilance! [is required]
- Demand bias. Or, another way to put it, the tallest kid in third grade. Have we presented the options in such a way that our respondents (or participants) get a feeling for what we want them to say? This can be a huge issue. Qualitative moderators (of which I'm one) always give a big spiel at the beginning about there being "no right or wrong answers" and the importance of "being open and honest," partly to alleviate this concern. Have we sufficiently masked our client's interest in the study so as not to bias people one way or the other? Again…we have to be vigilant.
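To make the N = 8 problem concrete, here is a rough back-of-the-envelope sketch (hypothetical numbers, standard-library Python only) of the approximate 95% margin of error around a reported proportion at different subgroup sizes:

```python
# Rough illustration with hypothetical numbers: approximate 95% margin of
# error around a reported proportion (normal approximation, which is itself
# shaky at n = 8 -- that's part of the point).
import math

def moe_95(p: float, n: int) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (8, 100, 400):
    print(f"n = {n:>3}: 50% +/- {moe_95(0.5, n) * 100:.0f} points")

# The n = 8 cut (our "men aged 25 to 27 in Segment X") swings by roughly
# +/- 35 points; a typical topline of n = 400 is closer to +/- 5.
```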
So does consumer insights have a replicability issue? Maybe. But not in the same way that basic research in psychology, biology, and other fields prone to p-hacking does.
We are trying to test fairly discrete, concrete hypotheses about consumer reactions. We use relatively large sample sizes (at least at the topline). And in some cases we try to mitigate demand bias by asking people for their honest reactions or by masking our intent.
Our issues have more to do with sample framing, precise question wording, and how people are exposed to our stimuli. Marketing research and insight development is very much an observational science, less controlled than social psychology. We try to keep our work as clean and unconfounded as possible, but our results can vary for a variety of reasons, since we (mostly) cannot control every aspect of the research experience for our respondents and group participants.
We're not trying to be precisely correct but rather to point our clients in the best overall direction. The old adage "All models are wrong, but some are useful" applies strongly to everything we do.
By no means am I blasé about replicability issues in market research, though. If we are not careful, we could find ourselves delivering insights based on spurious results. We have to constantly interrogate our methods and our data. As business professionals (in addition to being researchers), we have to justify the trust our clients and stakeholders place in us and in what we deliver.