Quantitative vs. qualitative research



So this much is obvious: quantitative research uses numbers and statistics to draw conclusions about large populations. You count something that is countable, and process results across the sample.

 

Qualitative methods are more elusive, but in general they revolve around collecting data from people about an experience: how they used a service, or how they felt about something, whether spoken or written. The data is generally speech or talk, albeit with various levels of meaning inferred above and below it (whether the speaker is truthful, or whether what they say has a deeper or hidden meaning). Rather than applying a statistical test to the data, a qualitative researcher must read or listen to it and interpret what is being discussed, often hoping to discover patterns or contradictions.

 

Interpretation happens in both approaches: quantitative results are still examined in context (often compared with other numbers and data), and given a metric of significance, such as a p-value or r-squared, to assess whether the results, or part of them, are meaningful.
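As a quick illustration of one such metric: r-squared is just the square of the correlation between two variables. The sketch below uses only Python's standard library, and the function name and sample numbers are made up purely for demonstration.

```python
# Illustrative only: computing r-squared (the square of Pearson's
# correlation coefficient) for two lists of paired observations.
from statistics import mean

def r_squared(xs, ys):
    """Return the squared correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (var_x * var_y)

print(r_squared([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 for a perfectly linear relationship
```

Values near 1 suggest the two measures move together; values near 0 suggest no linear relationship, which is one simple way a quantitative result is judged meaningful or not.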

 

Finally, a quantitative approach is generally seen as belonging to a positivist paradigm, while qualitative methods fit better with constructionist or pragmatic paradigms (Savin-Baden and Major 2013). However, both are essentially attempting to model and sample something about the world in which we live so that we can simplify and understand it. And it’s not a case of one being better than the other: just as a hammer can’t be used to turn screws, or a screwdriver to hammer in nails, the different methods have different uses. Researchers should always make sure the question comes first, and use it to choose the methodology.

 

But you should also ask: is there a quantitative way to measure what your question is asking? It might be something as simple as counting people, or a numerical aspect like salary. There are also quantitative measures of things like anxiety or pain that can be used as proxies to make inferences across a large population. However, for a detailed understanding of these issues and how they affect people, such metrics can be crude, and don’t get to the detail of the lived experience.

 

However, choosing the right approach also depends on how much is known about the research question and topic area. If you don’t know what the problems are in a field, you don’t know what questions to ask, or how to record the answers.

 

I would argue that even in the physical sciences, qualitative research comes first and sets the questions to answer with quantitative methods. Quantitative research projects usually grow from qualitative observations of the physical world, such as 'I can see that ice seems to melt when it gets warm. At what temperature does ice melt?', or from qualitative exploration of the existing literature to find findings from other research that are surprising or unexplained.

 

In the classic high-school science experiment above, you would quantitatively measure the melting point of ice by taking a sample. You don't try to melt all the ice in the world: you take one piece, and assume that other ice behaves in the same way. In both quantitative and qualitative research, sampling correctly is important. Just as taking only one small piece of impure ice will give you skewed results, so will sampling only one small part of a human population.

 

In quantitative research, because you are usually sampling for only one question at a time (e.g. temperature), it's best to have a large sample size. Especially when dealing with naturally variable, unrestricted measures (such as a person's height), the data will tend to form a bell curve, with a large majority of the answers in the middle and a small number of outliers at either end. If we were sampling ice to melt, we might find that most ice melts at around the same temperature, but that very pure or dirty ice differs slightly. We would take the answer to be the statistical average, or mean, found by adding up all the results and dividing by the sample size.
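That averaging step can be sketched in a couple of lines of Python; the melting-point readings below are hypothetical values, purely for demonstration.

```python
from statistics import mean

# Hypothetical melting-point readings in degrees Celsius (made-up sample)
readings = [0.0, 0.1, -0.1, 0.0, 0.2]

# The mean: add up all the results, divide by the sample size
average = sum(readings) / len(readings)

print(average)          # roughly 0.04 for this sample
print(mean(readings))   # the standard library's mean gives the same answer
```

Real analyses would of course also report the spread (standard deviation) and sample size alongside the mean, but the principle is the same.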

 

You could argue that the same is true for qualitative research. If you are asking people about their favourite ice cream, you'll get a better answer by asking a large number of people, right? Well, this might not always be true. Firstly, just as with the ice melting experiment, sampling every piece of ice in the world will not add much more accuracy, but will be a lot more work. And with qualitative research, you are generally asking a much more complicated question of each person sampled, so the workload grows rapidly as your sample size increases.

 


As your qualitative data grows, Quirkos can help you manage and make sense of it...

 

Remember, it's rare that qualitative research aims to give one definitive answer: it's more exploratory, and interested in the outlier cases just as much as the common ones. So for our qualitative research question 'What is your favourite ice cream?', people may talk about gelato, sorbet or iced coffee. Are these really ice cream? One could argue that technically they are not, but if people consider them to be ice cream, and we want to know what to sell for dessert at our restaurant, this becomes relevant. As a result of qualitative research, we usually learn to ask better questions: 'What is your favourite frozen dessert?' might be a better one.

 

Now our qualitative research has helped us create a good piece of quantitative research. We can do a survey with a large sample size, and ask the question 'What is your favourite frozen dessert?' and give a list of options which are the most common answers from our qualitative research.

 

However, there can still be flaws with this approach. When answering a survey, people don't always say what they mean, and you lose the context of their answers. In surveys there is a primacy effect: people are much more likely to tick the first answer in a list. In this case, the richness of our qualitative answers is lost. We don't know what context people are talking about (while walking along a beach, in a restaurant, or at home?), and we also lose the direct contact with the respondent that would tell us if they are lying or being sarcastic, and that would let us ask follow-up questions.

 

That's why qualitative research can still be useful as part of, or following, quantitative research: for discovering ‘why’ – understanding the results in the richness of lived experience. Often research projects will have a qualitative component – taking a subset of the larger quantitative study and getting in-depth qualitative insight from them.

 

There’s no shame in using a mixed methods approach if it is the most appropriate for the subject area. While there is often criticism of studies that ‘tack on’ a small qualitative component and don’t properly integrate or triangulate the two types of results, this is an implementation problem rather than a paradigm problem. But remember, it’s not a case of one approach vs another: there are no inherently good or bad approaches. Methods should be appropriate to each task and question, and should be servants to the researcher, not ruling them (Silverman 2013).

 

Quirkos is about as close to a pure qualitative software package as you can find. It's quick to learn, visual, and keeps you close to the data. Our focus is on doing qualitative coding and analysis well, not on attempting statistical analysis of qualitative data. We believe that for most qualitative researchers that's the right methodological approach. However, there is capacity for some mixed-method analysis, so that you can filter results by demographic or other data.

 

The best way to see if Quirkos works for you is to give it a go! Download our one month free trial of the full version with no restrictions, and see if Quirkos works for your research paradigm.

 

 

Engaging qualitative research with a quantitative audience.

graphs of quantitative data in media

 

The last two blog post articles were based on a talk I was invited to give at ‘Mind the Gap’, a conference organised by MDH RSA at the University of Sheffield. You can find the slides here, but they are not very text heavy, so don’t read well without audio!

 

The two talks which preceded me, by Professors Glynis Cousin and John Sandars, echoed quite a few of the themes. Professor Cousin spoke persuasively about reductionism in qualitative research, in her talk on the ‘Science of the Singular’ and the significance that can be drawn from a single case study. She argued that by necessity all research is reductive, and even ‘fictive’, but that doesn’t restrict what we can interpret from it.

 

Professor Cousin described how Goffman (1961) and Kesey (1962) both did extensive ethnographies of mental asylums at about the same time, but one wrote a classic academic text, and the other the ‘fictive’ novel One Flew Over the Cuckoo’s Nest. One could argue that both were very influential, but the different approaches to ‘writing up’ appeal to different audiences.

 

That notion of writing for your audience was evident in Professor Sandars' talk, and his concern for communication methods that have the most impact. Drawing from a variety of mixed-method research projects in education, he talked about choosing a methodology that balances the approach the researcher desires in their heart with what the audience will accept. It is little use choosing an action-research approach if the target audience (or journal editors) find it inappropriate in some way.

 

This sparked some debate about how well qualitative methods are accepted in mainstream journals, and whether there is a preference for publishing research based on quantitative methods. Some felt that authors are obliged to take a defensive stance when describing qualitative methods, eating further into the limited word counts that already cut down so much detail in qualitative dissemination. The final speaker, Dr Kiera Barlett, also touched on this issue when discussing publication strategies for mixed-method projects. Should you have separate qualitative and quantitative papers for respective journals, or try to have publications that draw from all aspects of the study? Obviously this will depend on the field, findings and methods chosen, but it again raised a difficult issue.

 

Is it still the case that quantitative findings have more impact than qualitative ones? Do journal articles, funders and decision makers still have a preference for what are seen as more traditional statistical based methodologies? From my own anecdotal position I would have to agree with most of these, although to be fair I have seen little evidence of funding bodies (at least in the UK and in social sciences and health) having a strong preference against qualitative methods of inquiry.

 

However, during the discussion at the conference it was noted that the preference for ‘traditional’ methods is not just restricted to journal reviewers, but extends to the culture of disciplines at large. This is often for good reason, and not restricted to a qualitative/quantitative divide: particular techniques and statistical tests tend to dominate, partly because they are well known. This has a great advantage: if you use a common indicator or test, people probably have a better understanding of the approach and its limitations, so they can interpret the results better and compare them with other studies. With a novel approach, one could argue that readers also need to go and read all the references in the methodology section (which they may or may not bother to do), and that comparisons and research synthesis become more difficult.

 

As for journal articles, participants pointed out that many online and open-access journals have removed word limits (or effectively done so by allowing hyperlinked appendices), making publication of long, text-based selections of qualitative data easier. However, this doesn’t necessarily increase palatability, and that’s why I want to get back to the issue of considering the audience for research findings, and choosing an appropriate medium.

 

It may be easy to say that if research is a predominantly quantitative world, quantifying, summarising and statistically analysing qualitative data is the way to go. But this is abhorrent, not just to the heart of a qualitative researcher, but also deceptive – imposing a quantitative fiction on a qualitative story. Perhaps the challenge is to think of approaches outside the written journal article. If we can submit a graphic novel as a PhD, or explain our research as a dance, we can reach new audiences, and engage in new ways with existing ones.

 

Producing graphs, pie charts, and even the bubble views in Quirkos are all ways that essentially summarise, quantify and potentially trivialise qualitative data. But if this allows us to access a wider audience used to quantitative methods, it may have a valuable utility, at least in providing that first engagement that makes a reader want to look in more detail. In my opinion, the worst research is that which stays unread on the shelf.

 

 

How to set up a free online mixed methods survey

It’s quick and easy to set up an online survey to collect feedback or research data in a digital format, meaning you can get straight to analysing the data. Unfortunately, most packages like SurveyMonkey, SurveyGizmo and Kwiksurveys, while all compatible with Quirkos, require a paid subscription before you can actually export any of your data and analyse it.

 

However, there are two great free platforms we recommend that allow you to run a mixed-method survey and easily bring all your data into Quirkos to explore and analyse. In this article, we'll go through a step-by-step guide to setting up a survey in eSurv, and exporting the data to Quirkos.

 

eSurv.org

This is a completely free platform, funded by contributions from universities, but available for any use. There are no locked features or restrictions on responses, and it has an easy-to-use online survey designer. There are customisable templates, and you can have custom exit pages too.

Once you have signed up for an account, you will be presented with the screen above, and will be able to get going with your first survey. Just make sure you click on the verification link in the e-mail sent to you, which will allow you to access all the features. The first page allows you to name the survey and set up the title and page description, all of which have options for changing the text formatting.

 

The next screen shows a series of templates you can use to set the style of your survey. Choose one that you like the look of, and you have the option of customising it further with your logo or other colour schemes. Click next.

Now you are ready to start adding questions.

 

The options box on the right shows all the different types of questions available, and each one has many customisation options at the bottom of the screen. For example, the single text box option can be made to accept only numerical answers, and you can change the maximum length and display size of the box. All questions can be made mandatory, with a custom 'warning' shown if someone does not fill in that field.

 

The drag and drop ranking feature is a nice option, and pretty much all the multiple-choice and closed question formats you might want are represented.

 

When you have chosen the title and settings for each question, you can click on the 'Save & Add Next' button on the top right to quickly add a series of questions, or 'Save & Close' if you are done.

 

There are also Logic options to show certain questions only in response to certain answers (for example, 'Please tell us why you didn't like this product'). It is of course possible to edit the questions, and to rearrange them using the drag icon in the main questionnaire overview.

 

You can test the survey to see how it looks, and when happy click the launch button to make it available to respondents. This also gives you a QR code linking to the survey, allowing smartphone users to complete the survey from a link on posters or printed documents. While you can customise the link title, the web address is always in the format of "https://eSurv.org?u=survey_title".

 

You can have a large number of surveys on the go at once, and manage them all from the 'Home' screen, which also shows you how many responses you have had.

 

Once you are ready to analyse your data, open the survey and click on the export button. This gives the options above to select which questions and respondents you want to export, and a date range (useful if you only want to put in new responses). For best use in Quirkos, select the Compact and .csv File format options, and then click download.

 

exported csv file in excel

The only step you will probably want to take before bringing the data into Quirkos is to remove the first row (highlighted above). By default eSurv creates a row which numbers the questions, but it’s usually easier to have the questions themselves as the titles, not just the numbers. Just delete the first row starting with ‘Question’: this will remove the question numbers, and Quirkos will see the first row with the actual question text. Save any changes in Excel/LibreOffice, making sure you use the CSV (Comma delimited) format; ignore the warning that ‘some features may be lost’ and choose ‘Yes’ to keep using that format. You can also remove any columns here that you don’t want (for example, e-mail address if it was not provided), but you can also do this in Quirkos.
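If you would rather script this step than edit the file in a spreadsheet, a minimal Python sketch could look like the following; the function name and file names here are hypothetical examples, not part of eSurv or Quirkos.

```python
import csv

def drop_question_numbers(in_path, out_path):
    """Remove the first row of an exported CSV (the question-number
    row) so the row of question text becomes the header."""
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows[1:])  # skip the first row only
```

You would call it as, say, `drop_question_numbers("esurv_export.csv", "esurv_for_quirkos.csv")` and then import the resulting file.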

 

In Quirkos, start a new Structured Questions project, and select the Import from CSV option from the bottom-right 'Add Source' (+) button. Select the file you saved in the previous step, and you will get a preview of the data looking like the screenshot above. Here you have the option to choose which question you want to use for the Source Title (say, a name or respondent ID), and any you might want to ignore, such as IP address. Then make sure that open-ended questions are set as Question, and that Property is associated with any discrete or numerical categories. Click import, and voilà!

 

Should you get new responses, you can add them in the same way to an existing project with the same structure; just make sure when exporting from eSurv that you select only the newest responses, and don't duplicate older ones.

 

Now you can use Quirkos to go through and code any of the qualitative text elements, while using the properties and quantitative data to compare respondents and generate summaries. So, for example, you can see the comments that people with negative ratings made side by side with comments from positive feedback, or compare people from different age ranges.

 

If you need even more customisation of your survey, the open-source platform LimeSurvey, while not as easy to use as eSurv, gives you a vast array of customisability options. LimeService.com allows 25 responses a month for free, but we have our own unrestricted installation available free of charge to our customers – just ask if you need it!

 

P.S. I've also done a video tutorial covering setting up and using eSurv, and exporting the results into Quirkos.

Why qualitative research?

There are lies, damn lies, and statistics

It’s easy to knock statistics for being misleading, or even misused to support spurious findings. In fact, there seems to be a growing backlash at the automatic way that significance tests in scientific papers are assumed to be the basis for proving findings (an article neatly rebutted here in the aptly named post “Give p a chance!”). However, I think most of the time statistics are actually undervalued. They are extremely good at conveying succinct summaries about large numbers of things. Not that there isn’t room for more public literacy about statistics, a charge that can be levied at many academic researchers too.

But there is a clear limit to how far statistics can take us, especially when dealing with complex and messy social issues. These are often the result of intricately entangled factors, decided by fickle and seemingly irrational human beings. Statistics can give you an overview of what is happening, but they can’t tell you why. To really understand the behaviour and decisions of an individual, or a group of actors, we need in-depth knowledge: one data point in a distribution isn’t going to be enough.

Sometimes, to understand a public health issue like obesity, we need to know about everything from the supermarket psychology that promotes unhealthy food, to how childhood depression can be linked with obesity. When done well, qualitative research allows us to look across societal and personal factors, integrating individuals’ stories into a social narrative that can explain important issues.

To do this, we can observe the behaviour of people in a supermarket, or interview people about their lives. But one of the key factors in some qualitative research, is that we don’t always know what we are looking for. If we explicitly go into a supermarket with the idea that watching shoppers will prove that supermarket two-for-one offers are causing obesity, we might miss other issues: the shelf placement of junk food, or the high cost of fresh vegetables. In the same way, if we interview someone with set questions about childhood depression, we might miss factors like time needed for food preparation, or cuts to welfare benefits.

This open-ended, sometimes called ‘semi-structured’, or inductive analytical approach is one of the most difficult, but most powerful, methods in qualitative research. Collecting data first, and then using grounded theory in the analytic phase to discover underlying themes from which we can build hypotheses, sometimes seems like backwards thinking. But when you don’t know what the right questions are, it’s difficult to find the right answers.

More on all this soon…