I once had a very interesting conversation at an MRS event with a market researcher from a major media company. He told me that they were increasingly ‘costing-out’ the qualitative open-ended questions from customer surveys because they were too expensive and time-consuming to analyse. Instead, they were replacing open-ended questions with a series of Likert scale questions which could be analysed automatically and statistically.
I hear similar arguments a lot, and I understand the sentiment: doing good qualitative research is expensive and requires skilled interpretation. However, it’s just as possible to do statistical analysis poorly and come up with meaningless or inaccurate answers. For example, when working with Likert scales you have to be careful about which parametric tests you use, and check whether the data is actually normally distributed (Sullivan and Artino 2013).
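To make that concrete, here is a minimal sketch of the kind of check involved: test each group of Likert responses for normality, and only use a parametric test if both pass. The data, segment names, and threshold are invented for illustration; this is not a substitute for a proper analysis plan.

```python
# Sketch: choose between a parametric and non-parametric test for
# two groups of 5-point Likert responses, based on a normality check.
# All data below is hypothetical.
from scipy import stats

def compare_groups(group_a, group_b, alpha=0.05):
    """Run Shapiro-Wilk on each group; fall back to Mann-Whitney U
    if either group looks non-normal."""
    _, p_a = stats.shapiro(group_a)
    _, p_b = stats.shapiro(group_b)
    if p_a > alpha and p_b > alpha:
        # Both plausibly normal: independent-samples t-test
        _, p = stats.ttest_ind(group_a, group_b)
        return ("t-test", p)
    # Ordinal / non-normal data: Mann-Whitney U test
    _, p = stats.mannwhitneyu(group_a, group_b)
    return ("mann-whitney", p)

# Hypothetical satisfaction scores from two customer segments
segment_1 = [5, 4, 5, 5, 4, 3, 5, 4, 5, 5, 4, 5]
segment_2 = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2, 1, 3]
test_name, p_value = compare_groups(segment_1, segment_2)
print(test_name, p_value)
```

The point is not the specific tests, but that discrete, bounded Likert data often fails the assumptions that parametric statistics quietly rely on.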
There is evidence that increasing the number of options in closed questions does not significantly change the responses participants give (Dawes 2008), so if you need real nuance in customer perceptions, why not let respondents choose their own words? “Quick Qual” approaches, like asking people to use one word to describe the product or their experience, can be really illuminating. Better yet, these responses are easy to analyse and can be presented as an engaging word cloud!
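The analysis for one-word responses really can be this simple: normalise the terms and count them. A quick sketch, with invented responses:

```python
# Count one-word survey responses (hypothetical data), ready for a
# frequency table or word cloud.
from collections import Counter

responses = ["Durable", "durable", "cheap", "Reliable", "durable",
             "reliable", "Cheap", "stylish", "durable", "reliable"]

# Lower-case and strip whitespace so "Durable" and "durable" merge
counts = Counter(word.strip().lower() for word in responses)
top_terms = counts.most_common(3)
print(top_terms)  # → [('durable', 4), ('reliable', 3), ('cheap', 2)]
```

Those frequencies feed straight into whatever word cloud or charting tool you prefer.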
Even when you have longer responses, it’s not always necessary to take a full classification and quantification approach to qualitative survey data, such as in Nardo (2003). For most market research investigations, this level of detail is not needed by either the researcher or the client.
Indeed, you don’t need to do deep analysis of the data to get value from it. A quick read through some of the comments can confirm your questions are on track, and that there aren’t other common issues being raised. It helps check you were asking the right questions, and can help explain why some people’s answers don’t match up with the rest. As ever, qualitative data is great for surprises: responses you hadn’t thought of, and understanding motivations.
Removing open-ended questions means you can’t provide nice quotes or verbatims from the feedback, which are great for grounding a report and making it come to life. Without quotes from respondents, you are also missing the opportunity to build marketing campaigns around comments from customer evangelists, something Lidl UK has done well by featuring positive Tweets about their brand. In this article marketing director Claire Farrant notes the importance of listening and engaging with customer feedback in this way. Giving people the chance to voice their opinions in more depth can also make them more satisfied with the feedback process.
I think it’s also vital to include open-ended questions when piloting a survey or questionnaire. Having qualitative data at an early stage lets you refine your questions and the possible responses. Sometimes the language used by respondents is important to reflect when setting closed questions: you don’t want to be asking questions like “How practical did you find this product?” when the most common term in the qualitative data is “durable”. It’s not always necessary to capture and analyse qualitative data for thousands of responses, but looking at a sample of a few dozen or a few hundred can show if you are on the right track before a big push.
You also shouldn’t worry too much about open-ended surveys having lower completion rates. A huge study by SurveyMonkey found that a single open question actually increased engagement slightly, and only when there were five or more open-ended response boxes did completion rates suffer.
Finally, without qualitative responses, you lose the ability to triangulate and integrate your qualitative and quantitative data: one of the most powerful tools in survey analysis. For example, in Quirkos it is trivial to do very quick comparative subset analysis, using any of the closed questions as a pivot point. You can look at the open-ended responses from people who gave high satisfaction scores next to those who gave low ones, and rather than being stuck trying to explain the difference in opinion, you can read the written comments to get an insight into why they differ.
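Outside a dedicated tool, the same pivot can be sketched in a few lines: split the open-ended comments by a closed satisfaction score and read each group side by side. The data and the 4+ cut-off below are invented for illustration.

```python
# Sketch: compare open-ended comments pivoted on a closed satisfaction
# question. All responses here are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "satisfaction": [5, 5, 2, 1, 4, 2],
    "comment": [
        "Quick and friendly response",
        "Solved my problem first time",
        "Had to wait more than 15 minutes",
        "Kept getting transferred between agents",
        "Helpful, though the hold music was grating",
        "Long queue and no call-back option",
    ],
})

# Pivot on the closed question: high scorers (4-5) vs low scorers (1-3)
df["group"] = df["satisfaction"].apply(lambda s: "high" if s >= 4 else "low")
for group, comments in df.groupby("group")["comment"]:
    print(group, list(comments))
```

Reading the two columns of comments next to each other is often enough to see why the satisfaction scores diverge.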
And I think this is key to creating good reports for clients. Usually, the end point for a client is not being told that 83% of their customers are satisfied with their helpline: they want actions that will improve or optimise delivery. What exactly was the reason 17% of people had a bad experience? It’s all very well to create an elaborate chain of closed questions, such as ‘You said you were unsatisfied. Which of these reasons best explains this? You said the response time made you unsatisfied. How long did you wait? 0–3 min, 3–5 min…’ and so on. But these types of surveys are time-consuming to program and make comprehensive, and sometimes just allowing someone to type “I had to wait more than 15 minutes for a response” would have given you all the data you needed on a critical point.
The depth and insight from qualitative data can illuminate differences in respondents’ experiences, and give you the key information to move things forward. Instead of thinking about how to cost-out qualitative responses, think about how to integrate them to provide maximum client value! A partnership between closed and open questions is usually the most powerful way to get both a quick summary and deep insight into complex interactions, so there is no need to be afraid of the open box!
Quirkos is designed to make it easy to bring qualitative and quantitative survey data together, and to use the intuitive visual interface to explore and play with market research data. Download a free trial of our qualitative analysis software, or contact us for a demo, and see how quickly you can step up from paper-based analysis into a streamlined and insightful MRX workflow!